RPG Building Blocks

File Access via Service Programs

RPG III improved upon RPG II by allowing files to be defined externally. Now, RPG IV’s subprocedures improve on RPG III by allowing files to be used in a program without coding a single F-spec or a disk I/O op code. Through external subprocedures, all file I/O can be “externalized” so that programs remain immune to changes in the database layout.

If you are developing new software in RPG and modifying existing systems to be Year 2000 ready, then you’ll be only too familiar with the following situations:

• You have dozens of high-usage files, all with six-digit dates, and each of these files is accessed by hundreds of programs, all using suspect date logic.

• You don’t want to write one more line of code that has anything to do with six digits, other than for printing and displaying.

• You want to upgrade your older programs to use eight-digit or date type fields and deploy them right now.

• You can’t change a file because too many programs are using it.
• You can’t start using century-enabled dates in your programs because the file can’t be changed.

So, how do you get around all of these obstacles? Establishing an entirely separate development environment that is century capable has significant drawbacks. It involves duplicating all-new development and maintenance, testing exhaustively, and feeling nervous about the rapidly approaching cut-over day.

Additionally, after you’ve succeeded (if you have), your new system will be just as resistant to change as it was before.

Perhaps it’s time to approach file access from a fresh angle.

What’s the Problem?

The inability to change files or programs easily can be placed squarely on the shoulders of the ubiquitous F-spec. Every F-spec that exists for a file is one more reason why that file cannot be changed. The solution is simple: don't ever code another one. For the last two years, the organization for which I work has been adhering to the technique outlined in this article, and the benefits have been notable. Indeed, it was precisely because RPG IV allowed us to implement this technique so easily that we adopted this language as our development medium. You, too, may be able to start introducing Year 2000-enabled programs to your live database, maybe as soon as today.

The I/O Procedure

Start by imagining that there are no F-specs, key lists, I-specs, resulting indicators, or op codes such as Set Lower Limits (SETLL), READ, CHAIN, or UPDATE. You’re halfway there already. Now think of all file access as a process consisting of five variables:

1. Which file you want to access
2. What you want to do with the file
3. How you wish to subset the file
4. What data you want to save, retrieve, or lose
5. Whether you succeed or fail

All of the things that I specifically asked you to forget (the F-specs, key lists, etc.) do nothing more than express these five variables, and these variables can all be modelled by what I shall refer to as the I/O procedure. The file that you want to access is embodied in the I/O procedure's name; the success or failure of the access is represented by the value returned by the procedure (i.e., *ON or *OFF); and the access method, number of keys, and data that you are working with are passed as three standard parameters.
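
To make the mapping concrete, here is a minimal sketch of a single call, using names that are introduced later in this article (so treat them as illustrative rather than as the definitive interface):

 * which file      = the procedure name (IOPR0 accesses PRODUCTS)
 * what to do      = the access method held in IO_Meth
 * how to subset   = the number of key fields that must match (0 here)
 * what data       = the data structure addressed by PR_DS@
 * success/failure = the 1-character return value (*ON = success)
C if IOPR0(IO_Meth:0:PR_DS@) = *ON
 * the requested record is now in the caller's data structure
C endif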

To show how this works, I’ll use the file PRODUCTS as an example. The DDS specifications of this file and logical file PRODUCTS2 are shown in Figures 1 and 2. (DDS for another logical file, PRODUCTS1, is not shown.)

Although this is a simple file, imagine that it is one of your high-use files, that hundreds of programs currently use it, and that you have been asked to do the following:

1. Change inception date (PRINDT) to an L-data type
2. Change PRCODE from 15 packed to 20 alpha
3. Extend Description from 30 alpha to 50 alpha
4. Add new field ‘Withdrawal Date’ (PRWDDT) as an L-data type

Using I/O procedures, you can do this right now. First, I’ll show you how to write and use an I/O procedure that accesses the original PRODUCTS file (I’ll call this Version 0), and then I’ll go on to the I/O procedure that implements all of the changes you were requested to make (I’ll call this Version 1). You can find the code for these procedures at MC’s Web site at http://www.midrangecomputing.com/mc/98/12.

Using IOPRV0

The I/O procedures that access the PRODUCTS file (physical and logicals) will all reside in a service program called IOPRV0. I prefix all I/O service programs with IO, followed by the unique two-letter code that I use when naming fields within that file. I then use the suffix Vx, where x is the version number.

Within this service program are three procedures, one for the physical file PRODUCTS and each of the two logicals. The procedure names are IOPR0, IOPR1 and IOPR2. A suffix of 0 indicates that the procedure is for the physical file, and any other suffix matches that of the logical it accesses. All I/O procedures have exactly the same prototype, so the prototype for IOPR0 (shown in Figure 3) is typical for any file.

The parameter IO_Meth allows you to specify how the file is accessed; that is, whether you are reading, updating, deleting, etc. IO_Meth is a “pass by reference” parameter, meaning that a 10-alpha field must be initialized to one of the access constants listed in Figure 4. Literals and fields with different definitions are not allowed. For some access constants (e.g., SETLLNXT), the value of IO_Meth will be modified by the I/O procedure (e.g., SETLLNXT will become GETNXT). Passing IO_Meth by reference ensures these modifications are carried from one I/O procedure call to the next.

The parameter IO_Keys lets you state how many key fields must match for a READ operation to succeed. Because IO_Keys is a “pass-by-value” parameter, you can pass literals, fields of compatible types, or expressions rather than having to supply a 3-digit packed field.
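
For instance, both a literal (as in Figure 5) and an expression are valid at the call site; the following is a sketch only, assuming MaxKeys is a numeric field in the caller (an illustrative name, not something the article defines):

C if IOPR2(IO_Meth:MaxKeys-1:PR_DS@) = *ON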

The IO_File@ parameter is a pointer to a data structure. The data structure pointed to would normally be identical to the file being accessed, though it needn’t be. At some future time, the file will change, but the data structure will not. This independence of file and data structure allows you to make changes to a file and recompile the I/O procedure that accesses it without having to change any program that uses the I/O procedure. For this example, assume a data structure called PRV0 has been defined externally and has exactly the same field names and definitions as the file PRODUCTS. I’ll put all this together in a program that reads the PRODUCTS file, counting all records that have a class of ‘DEMO’ (see Figure 5).

This example highlights a number of points about using I/O procedures. The first /COPY copies in the I/O constants and must be in every program that uses these constants. The second /COPY copies in all of the I/O procedure prototypes for file PRODUCTS. This allows me to refer to IOPR0 and IOPR2 in the body of the code.

The D-specs for the data structure PR_DS are worth studying. The extname(PRV0) part tells the compiler that there is an externally defined data structure called PRV0 out on the system. I’ve renamed it PR_DS (I’ll explain why in a moment). As a result, the compiler sets aside a block of space within the example program that is the same size as PRV0. The next step is to take the address of where this space starts and save it in PR_DS@. It is this address that I pass as the third parameter to the I/O procedure. The I/O procedure’s job becomes clearer: Now that it knows where the caller stores its data, it can read and change the caller’s data structure directly. This avoids having to transfer entire records from I/O procedure to caller: In fact, the only performance impact is the call overhead itself.
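
The article doesn’t show how PRV0 itself is created. One common way to supply such an externally described structure (an assumption on my part, not the published code) is a “format-only” physical file that never holds data and simply mirrors the fields of PRODUCTS, so that extname(PRV0) stays stable even after PRODUCTS changes:

* File Name: PRV0 (format only - no member, never written to)
A R PRV0R
A PRPRD# 7P 0 TEXT('Internal Product #')
A PRCODE 15P 0 TEXT('Product Code')
A PRCLAS 5A TEXT('Product Class')
A PRDESC 30A TEXT('Description')
A PRINDT 6P 0 TEXT('Inception Date')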

Now, why did I rename PRV0 to PR_DS? Because, although I expect the PRODUCTS file to be around for a long time, I do expect it to change. At some point in the future, I’m going to want to use a later version of the I/O procedure, and I don’t want to change every line of code that uses it. When I switch over to IOPRV1, all I will need to change is the /COPY and the extname(PRV0) code. If existing field definitions within the changed file haven’t altered, no other code change is necessary. I can now restate the function of the I/O procedure itself: I/O procedures support data storage and retrieval via data structures. How and where this data is physically stored is no longer relevant to the caller.

Now, I’ll examine the C-specs. Within the block titled “Doing it the slow way,” I’ve loaded up field PRPRD# with *LOVAL, set IO_Meth to the constant SETLLNXT, then invoked a loop that repeatedly calls IOPR0 as long as it returns *ON. “What’s going on here?” you may ask. It’s time that I explained the I/O constants in depth.

All of the constants that begin with SET are set operations and require an initial value for all of the key fields to the file being accessed. For IOPR0, there is only one key field: PRPRD#. Set operations perform a SETLL or Set Greater Than (SETGT) operation before doing anything else. SETLL and SETGT perform the set only, and do nothing else. The other constants do an initial set, then change the IO_Meth value to either GETNXT or GETPRV, then finally perform the Get Next/Previous.
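
As a further illustration (my own sketch, not from the article), the same constants can be combined to read backwards: position past the last ‘DEMO’ record with a set-greater-than, then read previous records while the class still matches:

C eval PRCLAS = 'DEMO'
C eval PRCODE = *HIVAL
C eval PRPRD# = *HIVAL
C eval IO_Meth = SETGTPRV
C dow IOPR2(IO_Meth:1:PR_DS@) = *ON
 * each pass returns the previous 'DEMO' record, highest keys first
C enddo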

So, loading IO_Meth with SETLLNXT and calling IOPR0 results in a SETLL being performed using the current value of PRPRD# (which I just loaded with *LOVAL); IO_Meth is then changed to GETNXT, and a read for the next record is done. Finally, *ON or *OFF is returned, depending on the success of the read. *ON indicates success, not error (the opposite of RPG resulting indicators). The loop will process the entire file because I specified 0 as the number of keys to match. When the loop executes the second time, the value of IO_Meth will be GETNXT. GETNXT and GETPRV read from where the file cursor is currently positioned. The following points about I/O procedures should be noted:

• No operation leaves a record locked.
• The file isn’t opened until the first access is performed (unless that first access is CLOF, in which case nothing happens). You don’t have to explicitly open the file before reading, updating, etc.

• UPDRCD and DLTRCD do not require a READ operation to be performed first. The values in the data structure are used to retrieve the record to be updated or deleted.

• Keys cannot be updated. While this was originally a design oversight, it hasn’t caused me any difficulty. This is because all the files that I work with are defined as unique, with the physical file keyed on an internally generated number (PRODUCTS is an example of this). If I wanted to update, say, PRCODE, I would update it using IOPR0, where PRCODE doesn’t feature as a key. But if I wanted to update the internally generated number (PRPRD#), I would have to delete then rewrite the record. This situation rarely arises.

• The service program IOPRV0 is created with an activation group of *CALLER. This means that all active programs sharing the same activation group and using IOPRV0 will be sharing the same access paths. If your programs run in activation groups of their own, this doesn’t present any problem. But if you’re running several programs per activation group, then you’ll have to make allowance for the fact that the file cursor may not be in the position in which you left it. To illustrate this point, imagine that PGMA performs a SETLL, using procedure IOPR0. Then, PGMA calls PGMB, which runs in the same activation group. If PGMB performs an operation using IOPR0, it will use the same access path as that currently used by PGMA. When control returns to PGMA, the file cursor will be where PGMB left it, possibly causing PGMA a problem.

• Multiple formats per file are not allowed.
• There is no file error trapping.

The second block of code in the example program, under the title “Doing it the fast way,” demonstrates how you do keyed access. In order to count all the PRCLAS=‘DEMO’ records with the least number of reads, I take advantage of the fact that PRODUCTS2 is keyed initially on PRCLAS. By setting PRCLAS to ‘DEMO’ and the IO_Keys parameter to 1, IOPR2 will return *OFF as soon as it strikes a record that doesn’t have a value of ‘DEMO’.

Coding I/O Procedures

Now that you know how to use an I/O procedure, you’ll need to know how to create one. The code for IOPR2—that is, PRODUCTS keyed by CLAS, CODE, and PRD#—can be found at MC’s Web site. If you’re saying “What? I need to code one of these for every file on the system? Are you kidding?” you’ll be gratified to know that’s exactly what some of the programmers with whom I work said. But stop and think a bit. These are the only F-specs, key lists, and RPG op codes that exist on your entire system for that file. If you’d rather stick to accessing the file directly from programs, then you’re effectively recoding some or all of the above every time you write a program that uses the file. Sure, there’s some work to be done up front, but it needs to be done only once. And if you have programs that reference several logicals over the same file (common in interactive programs that allow different sequencing), you don’t have to introduce any new code to use them. The code for the I/O procedure is also quite generic, so copying existing logic to create a new procedure makes the job particularly easy (especially if you copy from a procedure with the same number of keys). You could write a source generator if the number of files on your system warrants it.
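
The published IOPR2 source is on MC’s Web site and is not reproduced here; the skeleton below is only my sketch of the general shape such a procedure might take, based on the behavior described in this article. The F-spec keywords, the key-matching logic, and the use of built-in functions such as %eof and %found are my assumptions, not the author’s code.

 * /COPY QCPYSRC,IO_CONSTS is assumed, as in Figure 5
 * The only F-spec anywhere on the system for PRODUCTS2
FPRODUCTS2 UF A E K DISK RENAME(PRODUCTSR:PR2R) USROPN
 *
 * The caller's storage, mapped over the file's field names
D pCaller S *
D PRV0DS E DS extname(PRV0) based(pCaller)
 *
 * Key list shared by the procedures in this module
C PR2KEY klist
C kfld PRCLAS
C kfld PRCODE
C kfld PRPRD#
 *
P IOPR2 B export
D IOPR2 PI 1A
D IO_Meth 10A
D IO_Keys 3P 0 value
D IO_File@ * const
 *
D SavCLAS S like(PRCLAS) static
D SavCODE S like(PRCODE) static
D SavPRD# S like(PRPRD#) static
D Ok S 1A
 *
C eval pCaller = IO_File@
C eval Ok = *ON
 * Open on first use; a CLOF against a closed file does nothing
C if IO_Meth <> CLOF and not %open(PRODUCTS2)
C open PRODUCTS2
C endif
 * Set operations: the caller must have loaded all three key fields
C if IO_Meth = SETLL or IO_Meth = SETLLNXT or IO_Meth = SETLLPRV
C eval SavCLAS = PRCLAS
C eval SavCODE = PRCODE
C eval SavPRD# = PRPRD#
C PR2KEY setll PR2R
C endif
 * (SETGT, SETGTNXT, and SETGTPRV would be handled the same way,
 *  using a SETGT operation)
 * The combined constants become plain gets for this and later calls
C if IO_Meth = SETLLNXT or IO_Meth = SETGTNXT
C eval IO_Meth = GETNXT
C endif
C if IO_Meth = SETLLPRV or IO_Meth = SETGTPRV
C eval IO_Meth = GETPRV
C endif
 *
C select
C when IO_Meth = GETNXT
C read(n) PR2R
C when IO_Meth = GETPRV
C readp(n) PR2R
C when IO_Meth = WRTRCD
C write PR2R
C when IO_Meth = DLTRCD
C PR2KEY delete PR2R
C if not %found(PRODUCTS2)
C eval Ok = *OFF
C endif
C when IO_Meth = CLOF
C if %open(PRODUCTS2)
C close PRODUCTS2
C endif
 * UPDRCD would chain by key, restore the caller's values over the
 * fields the chain just read, and then UPDATE PR2R
C endsl
 *
 * A get fails at end of file or when the first IO_Keys key fields
 * no longer match the values saved at set time
C if IO_Meth = GETNXT or IO_Meth = GETPRV
C if %eof(PRODUCTS2)
C eval Ok = *OFF
C endif
C if Ok = *ON and IO_Keys >= 1 and PRCLAS <> SavCLAS
C eval Ok = *OFF
C endif
C if Ok = *ON and IO_Keys >= 2 and PRCODE <> SavCODE
C eval Ok = *OFF
C endif
C if Ok = *ON and IO_Keys >= 3 and PRPRD# <> SavPRD#
C eval Ok = *OFF
C endif
C endif
C return Ok
P IOPR2 E

Once one procedure like this exists, the others for the same file are largely copies with a different key list, which is why copying an existing procedure is the quickest way to create a new one.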

Fulfilling a Contract

I like to think of the I/O procedure as fulfilling a contract, the terms of which are to act on I/O access methods and guarantee support of the data structure that the I/O procedure was designed around. The fact that no conditions are placed on how these terms are met is what makes the I/O procedure so powerful. Procedures, by their very nature, are intelligent; files, by comparison, are extremely dumb. You can code anything you want into procedures. Say you want all records on PRODUCTS to be deleted if the value of PRCLAS changes to INACT. Code this into your I/O procedure, and the job is done. Or maybe all records that are deleted should also be written to archive; no problem. Or perhaps whenever a new product is added, a corresponding sales record should be created; you get the picture.
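
For instance, the archive-on-delete rule might live inside the DLTRCD branch of the I/O procedure. This is only a sketch of the idea; PRODARC and its record format PRODARCR are hypothetical names, and the article does not define an archive file:

C when IO_Meth = DLTRCD
 * fetch the record so the full image is available for archiving
C PR2KEY chain PR2R
C if %found(PRODUCTS2)
 * assuming the archive file shares the PRODUCTS field names, the
 * fields just read are written straight to it before the delete
C write PRODARCR
C delete PR2R
C endif

No caller changes, recompiles, or trigger programs are needed; every program that deletes through the I/O procedure picks up the rule automatically.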

It’s not so hard now to figure out how all of those changes that you were asked to make at the beginning of the article will be done. First, create a data structure called PRV1 with all the new fields and changes that were requested. Then create an extension file keyed by PRPRD# to hold the new field, Withdrawal Date. Create procedures IOPR0, IOPR1, and IOPR2 in new service program IOPRV1. These procedures will read or load PRV1, interact with both the existing PRODUCTS file and the new extension file, and do all necessary date and alpha-to-numeric conversions.
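
Going by the four changes requested at the start of the article, the Version 1 data structure might look something like this (a sketch only; the format-only file approach, the record format name, and the field order are my assumptions):

* File Name: PRV1 (format only - defines the Version 1 data structure)
A R PRV1R
A PRPRD# 7P 0 TEXT('Internal Product #')
A PRCODE 20A TEXT('Product Code')
A PRCLAS 5A TEXT('Product Class')
A PRDESC 50A TEXT('Description')
A PRINDT L TEXT('Inception Date')
A PRWDDT L TEXT('Withdrawal Date')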

Bear in mind that PRODUCTS is still the same as before, so all of those programs accessing it are unaffected. The important thing is that, once IOPRV1 is up and running, new programs can use it immediately and existing programs can be modified to use it at your own pace. Eventually, all of your programs will be using IOPRV1 rather than directly accessing PRODUCTS. At that time, you can convert PRODUCTS, knowing that the only code that needs to be changed is that within the I/O procedures themselves. Once you get to this stage, any further changes to PRODUCTS are just an overnight job.

Generic I/O

One aspect of I/O procedures is the ability to do “generic” I/O. By this I mean that you can code logic to access a file without committing yourself specifically to a file name until runtime. You can do this because all I/O procedures have the same parameter list and you can take advantage of procedure pointers. Procedure pointers are pointers that hold the address of where procedure logic, rather than data, starts. The example of procedure pointers at MC’s Web site should clarify this concept.

The point of interest here is that a procedure called IO is being called, even though no such procedure actually exists. If you look at the prototype for IO, you’ll see the keyword extproc. This keyword allows you to specify an alternative name for a procedure. If the argument to extproc had been within quotes (e.g., ‘IOPR2’), then the internal name IO would have been just another name for procedure IOPR2. But if the argument to extproc is a field name, as it is here (IO@), the argument is assumed to be a procedure pointer. The actual definition of IO@ conforms to this assumption—you can see that it is defined as a pointer by the asterisk (*), but it is further qualified as a procedure pointer by the keyword procptr. Just what does this mean? It means that the procedure IO has no meaning until the value IO@ is set to the address of a real procedure. Once this happens, IO is just another name for that (real) procedure. The nice bit is that the value of IO@ can remain unresolved right up to the moment procedure IO is called.

This is particularly useful in interactive programs, where the information to be displayed to the user relies on filters and sequences that aren’t known until the program actually runs. Suppose the user wants to see all products in code sequence. All you need to do is take the address of IOPR1 (you take procedure addresses with the %paddr built-in function) and set IO@ to this value. The call to IO is really now a call to IOPR1. Now, the user wants to see products in class/code sequence. You change IO@ to the address of IOPR2, and the call to IO becomes a call to IOPR2. And you’re not restricted to different logicals of the same file. Instead of passing PR_DS@ as the data structure pointer, as I have done, you can just as easily pass the address of any data structure. In this way, the call to IO could potentially be any file in your system.
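
The Web site example isn’t reproduced here, but the essential pieces probably look something like the following sketch. IO and IO@ are the names used in the article; the surrounding layout, and the reuse of PR_DS@ from Figure 5, are mine:

D IO@ S * procptr
 * IO is a prototype whose real target is whatever IO@ points at
D IO PR 1A extproc(IO@)
D IO_Meth 10A
D IO_Keys 3P 0 value
D IO_File@ * const
 *
 * User wants code sequence: aim IO at IOPR1
C eval IO@ = %paddr('IOPR1')
 * ...or class/code sequence: aim IO at IOPR2 instead
C eval IO@ = %paddr('IOPR2')
 *
 * Either way, the rest of the program just calls IO
C eval IO_Meth = SETLLNXT
C dow IO(IO_Meth:0:PR_DS@) = *ON
 * ...display or total the record just returned...
C enddo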

Considerations

The technique outlined in this article has been of tremendous benefit to the company I work for in transitioning its system written in ASSET (a CASE tool that generates RPG III) to RPG IV. Nevertheless, bear in mind that it remains a tailored solution; in other words, it was designed to answer problems that were significant to the company, but may not be so at other sites. The simple expedient of recompiling all programs that access a file isn’t available to our company, but it is commonplace elsewhere. Similarly, many of the restrictions that I outlined are not necessarily ones that you need to accept; they remain as restrictions simply because no need has arisen for us to remove them. Specifically, the restrictions are as follows:

• If record locking is required, the standard set of I/O constants (SETLLNXT, UPDRCD, etc.) can be extended, and the necessary logic can be coded into the I/O routine; a sketch of this idea follows this list. (None of our procedures locks records.)

• If keys need to be updated, the same approach as previously outlined can be used. The UPDRCD operation can’t update keys because it is a combined ‘read and update’; the read uses the current values of the IO data structure to locate the record to update and then updates the record with all of the IO data structure values. To update keys, you need to have separate READ and UPDATE operations.

• If programs must have unique access paths, they need to run in unique activation groups. This may lead to a proliferation of activation groups with severe performance implications. Our experience has been that, as more and more common business functions are off-loaded to ILE service programs, the need for programs to access files is reduced. Your situation may be different.
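
To illustrate the first point, a record-lock extension might add a constant and a branch along these lines. GETNXTLCK is a hypothetical constant of my own, not part of the set in Figure 4:

 * Added to QCPYSRC,IO_CONSTS:
D GETNXTLCK C const('GETNXTLCK')
 *
 * Added to the I/O procedure's SELECT block: the locking get omits
 * the (n) extender, so the update file's record lock stays in place
C when IO_Meth = GETNXTLCK
C read PR2R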

Additionally, our database is viewed as a component of the software, not as a general reference for user queries or other external systems. This aspect has to be considered when deciding to use I/O procedures. Rapid database change is a function of the number of direct references to that database. The aim of I/O procedures is to get that number down to one reference per file; if you have a large number of queries, this aim could be so compromised that IO procedures deliver little or no benefit.

Finally, I/O procedures don’t necessarily reduce the amount of work that needs to be done in order to make database changes; in some cases, they add more. The key is that they eliminate the need for all of that work to be done at once.

* File Name: PRODUCTS
* File Description: Products by PRD#
*
A UNIQUE
A R PRODUCTSR
A PRPRD# 7P 0 TEXT('Internal Product #')
A PRCODE 15P 0 TEXT('Product Code')
A PRCLAS 5A TEXT('Product Class')
A PRDESC 30A TEXT('Description')
A PRINDT 6P 0 TEXT('Inception Date')
A K PRPRD#

Figure 1: DDS for the PRODUCTS file

* File Name: PRODUCTS2
* File Description: Products by CLAS,CODE,PRD#
*
A R PRODUCTSR PFILE(PRODUCTS)
A K PRCLAS
A K PRCODE
A K PRPRD#

Figure 2: DDS for logical file PRODUCTS2

*=================================================
* PRODUCTS by PRD#
*
D IOPR0 PR 1A
D IO_Meth 10A
D IO_Keys 3P 0 value
D IO_File@ * const
*
*=================================================

Figure 3: This prototype is typical of all files accessed by I/O service programs

* IO Access Constants - reside in file QCPYSRC(IO_CONSTS)
*==============================================================
D CLOF C const('CLOF')
D DLTRCD C const('DLTRCD')
D GETNXT C const('GETNXT')
D GETPRV C const('GETPRV')
D OPNF C const('OPNF')
D SETGTNXT C const('SETGTNXT')
D SETGTPRV C const('SETGTPRV')
D SETLLNXT C const('SETLLNXT')
D SETLLPRV C const('SETLLPRV')
D SETLL C const('SETLL')
D SETGT C const('SETGT')
D UPDRCD C const('UPDRCD')
D WRTRCD C const('WRTRCD')
*=========================================================

Figure 4: The IO method parameter must be initialized to one of these data access values

D EXAMPLE1 PR
*
/COPY QCPYSRC,IO_CONSTS
/COPY QCPYSRC,IOPRV0
*
D IO_Meth S 10A
D Count S 7P 0 inz(0)
*
D PR_DS E DS extname(PRV0)
D PR_DS@ S * inz(%addr(PR_DS))
*
D EXAMPLE1 PI
*
* Doing it the slow way:
C eval PRPRD# = *LOVAL
C eval IO_Meth = SETLLNXT
C dow IOPR0(IO_Meth:0:PR_DS@) = *ON
C if PRCLAS = 'DEMO'
C eval Count = Count + 1
C endif
C enddo
*
* Doing it the fast way:
C eval Count = 0
C eval PRCLAS = 'DEMO'
C eval PRCODE = *LOVAL
C eval PRPRD# = *LOVAL
C eval IO_Meth = SETLLNXT
C dow IOPR2(IO_Meth:1:PR_DS@) = *ON
C eval Count = Count + 1
C enddo
*
C eval *INLR = *ON
C return

Figure 5: Using an I/O procedure to read selected records
