Practical Array Processing: Dynamic Arrays


Another trick with arrays is sizing them. This article shows you how to size your arrays dynamically.

 

In an earlier article, I showed you how to initialize arrays and how to sort them based on a subfield. I did this with fairly small arrays, where you could easily define the values for the array in your D-specs. The next trick is loading arrays from disk: you don't know how many records you will load, and you really don't want to allocate memory for the entire array up front.

Expanding on the Original Store-Sorting Program

The original program had a few stores hardcoded into the D-specs as initialized variables. In this version of the program, I'm going to create an array of 1000 stores and then load that array from disk.

 

     H OPTION(*NODEBUGIO:*SRCSTMT)


These are just a couple of standard H-spec keywords that I use all the time. Both make programs a little easier to debug: *NODEBUGIO keeps the debugger from stepping through the generated input and output specifications, and *SRCSTMT keeps statement numbers tied to the original source statements, so compile listings and runtime messages line up with your code.

 

     FSTORES    IF   E             DISK    RENAME(STORES:RSTORES)


Here is the file specification for the STORES database file. I created it using DDL, so the record format name is the same as the file name; RPG requires us to rename the format.

 

     d cStores         ds                  based(pStores)
     d  aStores                            dim(1000)
     d   aStoreID                     3    overlay(aStores)
     d   aStoreCity                  20    overlay(aStores:*NEXT)
     d   aStoreState                  2    overlay(aStores:*NEXT)
     d   aStoreZip                    5    overlay(aStores:*NEXT)
     d pStores         s               *
     d nStores         s              5u 0
     d x               s              5u 0

 

This is essentially the same code as in the previous program, except without the hard-coded values. I've defined a data structure and then, within it, an array with overlaid subfield arrays (using the overlay *NEXT syntax) so that I can sort by subfield. The only problem here is the 65,535-byte limit on the total size of the array.

 

One other thing you should note is that the data structure is defined as "based," using the keyword based(pStores). This means that the array has no memory assigned; instead, I need to create memory for it. Just after the array definition, you see that I defined a pointer variable named pStores. Technically, I don't need to define the variable--the based keyword does that--but adding the explicit definition line doesn't hurt. Either way, the pStores pointer is in an unknown state, probably null, when the program begins. Whatever is in it, it doesn't point to my memory, so the array is unusable. Before using the array, I need to allocate some memory.

 

      /free
       setll *start STORES;
       nStores = 0;
       read STORES;
       dow not %eof(STORES);
         if HasDeli = 'Y';
           nStores += 1;
         endif;
         read STORES;
       enddo;
       if nStores > 0;


Next, I check to see how many records I actually need. In this case, I simply loop through the file, looking for records that match. With SQL, it would have been much easier; I could have used a single exec sql statement to set the value of nStores to the count of records that matched my criteria. I hope to be able to show you that and a few other SQL techniques in the next installment of array processing.
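
For what it's worth, here's a minimal sketch of that SQL alternative, assuming the column is named HASDELI in the STORES table (and noting that, depending on your release, the precompiler may want a signed host variable rather than the unsigned nStores defined above):

       // Sketch only: one embedded statement replaces the counting loop
       exec sql
         select count(*)
           into :nStores
           from STORES
          where HASDELI = 'Y';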

 

         pStores = %alloc( nStores * 30);
         setll *start STORES;
         x = 0;
         read STORES;
         dow not %eof(STORES);
           if HasDeli = 'Y';
             x += 1;
             aStoreID(x) = StoreID;
             aStoreCity(x) = StoreCity;
             aStoreState(x) = StoreState;
             aStoreZip(x) = StoreZip;
           endif;
           read STORES;
         enddo;


Now for the meat of today's exercise. This section of code is quite simple: I read each record, and if it matches the selection criteria, I add the record to the array. In this case, I'm directly moving database fields from the external file to the subarrays one at a time.

 

It's a little more complex than that, though. As I observed earlier, the based data structure doesn't start out with any memory of its own, so neither do the arrays; if you were to use them before allocating storage, you would get some serious errors. They might not even show up immediately as errors; instead, you'd notice strange problems that you couldn't readily diagnose--one of the classic symptoms of memory corruption.
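
If you want a belt-and-suspenders check, a quick test of the basing pointer before touching the array turns that kind of silent corruption into an obvious, diagnosable condition. This is just a defensive sketch, not part of the original program:

       // Defensive sketch: don't touch the array until storage exists
       if pStores = *null;
         dsply 'Store array storage was never allocated';
         return;
       endif;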

 

Note the first line of this section, which sets the value of pStores. The %alloc built-in function (BIF) will allocate the amount of memory you ask for and return a pointer to it. In this case, I'm asking for 30 bytes per record found--the combined length of the four subfields (3 + 20 + 2 + 5). The number of records found is in nStores, so the logical step is to allocate that number times 30--nothing fancy, and now pStores has a value.
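
If you'd rather not hard-code the 30, the %size BIF returns the length of one element of an array, so the allocation keeps itself in step with the data structure if a subfield ever changes size. A small variation on the same line:

         // Variation: let the compiler supply the element length (30 today)
         pStores = %alloc( nStores * %size(aStores) );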

 

Now I can read data into the array. I read a record and once again test the selection criteria (that HasDeli is 'Y'). If it passes, I increment the index and add the fields from the data record to the subarrays. There's a subtle opportunity for error here, though: if a new matching record is added between the time I compute the number of matching records and the time I load the array, I'll run past the end of the array. I don't show full handling for that here, because the right answer depends on your application; you might simply go back, recompute the count, and reattempt the load, or you could ignore any additional records and accept an incomplete list.
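
If you take the second route, the guard is only a couple of lines. Here's a rough sketch of how the load loop might look with it (an illustration only, not the original program's code):

         dow not %eof(STORES);
           if HasDeli = 'Y';
             if x >= nStores;          // array is already full
               leave;                  // ignore rows added since the count
             endif;
             x += 1;
             aStoreID(x) = StoreID;
             aStoreCity(x) = StoreCity;
             aStoreState(x) = StoreState;
             aStoreZip(x) = StoreZip;
           endif;
           read STORES;
         enddo;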

 

         x = %lookup( '002': aStoreID: 1: nStores );
         sorta %subarr( aStoreCity: 1: nStores);
         sorta %subarr( aStoreState: 1: nStores);
         sorta %subarr( aStoreZip: 1: nStores);
         sorta %subarr( aStoreID: 1: nStores);


The primary difference now is how you process the arrays. When doing a lookup, it's relatively straightforward: you add two new parms that identify the starting element and the number of elements that you want to include in the lookup.
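
As with any %lookup, the return value is the element number of the first match, or zero if nothing in the searched range matched, so the usual test applies to the result of the lookup above. A quick sketch of how it might be used:

         if x > 0;
           dsply 'Store 002 is in the selected set';
         else;
           dsply 'Store 002 is not in the set';
         endif;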

 

The sorta opcode is a little different: it lets you use a relatively new BIF called %subarr, which defines a subsection of an array, or "subarray." The subarray can then be sorted just like any other array. The %subarr BIF isn't supported everywhere, though; you can't, for example, use it inside %lookup. I would have liked to see %subarr extended there to give a consistent syntax for all subarray processing. No matter; use the appropriate syntax in the appropriate place and you're fine.

 

       endif;

 

This is the endif that closes the nStores > 0 test, so we avoid trying to process an empty set.

 

       *inlr = *on;
      /end-free

 

And this is how you get out. You may have noticed that I didn't execute a %dealloc BIF. That's because there is no %dealloc BIF. There is a dealloc opcode, but that's a little different. I don't like the fact that I use a BIF in one place and an opcode in the other. You'll have to make your own call. Any allocated memory is released when the activation group ends, so if you're using activation groups for proper housekeeping, you may choose to go that route as well.
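
For completeness, here's what the explicit cleanup would look like with the opcode; the (n) extender sets the pointer back to *null after the storage is freed:

       dealloc(n) pStores;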

 

That's it for dynamic arrays. The last part of this mini-series will be to implement dynamic arrays in conjunction with embedded SQL. Until then, keep coding!

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.

