SQL excels at set-at-a-time processing, and what better application than archiving your data?
No matter how much disk you have, you fill it. It's a corollary to Parkinson's Law, which posits that work expands to fill the time allotted. It's the same with databases; despite the ever-increasing amount of disk available (who could have imagined a 2TB disk for the home computer?), you will always run out, so at some point you will have to reduce the amount of data in your database. As it turns out, though, SQL can provide you with significant help.
The Simple Way: Get Rid of It!
The fastest way to winnow down your data is, of course, just to delete it. Let's take a simple case: order header and detail files, where the order header has a status code and a last-activity date. Clearing old data from this data model is pretty simple: I'll delete orders whose status code is "C" and whose last activity was at least 30 days ago.
exec SQL delete from ORDHDR
where OHSTS = 'C' and HLACT < (current date - 30 days);
exec SQL delete from ORDDTL
where ODORD not in (select OHORD from ORDHDR);
Not much to it, is there? I do this in two steps: first, I delete the records from the primary file that match the deletion criteria, and then I delete the related records that are newly orphaned (obviously, older orphans would be deleted as well).
Reorganize the files using RGZPFM and you're done! Efficient, easy to implement, and not too hard to read. The only thing even mildly tricky is the DELETE ... WHERE NOT IN syntax, but even that isn't too complex: the query engine deletes only those detail records whose order number is not in the set of order numbers in the header.
In fact, the only real drawback to this technique is that the orders are deleted and lost forever (at least without some other sort of archiving mechanism). So you get your database skinny, but is the cost too high?
The Better Way: Archiving
Let's assume that your backup data is in another library. Let's call that library BACKUP and then archive our data into it. This involves a couple of extra steps, and I want to show you both the right way and the wrong way to do it. Let's start with the order header file:
exec SQL insert into BACKUP/ORDHDR
(select * from ORDHDR
where OHSTS = 'C' and HLACT < (current date - 30 days));
exec SQL delete from ORDHDR
where OHSTS = 'C' and HLACT < (current date - 30 days);
This looks good, doesn't it? The only problem is that under very specific circumstances you could lose data with this technique. Can you see how? Let me explain: if an order becomes eligible for deletion in the window between the INSERT and the DELETE, it gets deleted without ever having been archived.
Now, that's unlikely in this particular instance, since one would hope that the act of marking the record closed would also set the activity date to today's date, which would in turn make the record ineligible for deletion. But the principle stands: you shouldn't delete records based on criteria that could have changed since the archival step, no matter how closely together the two statements run.
A second cautionary note: I left ORDHDR unqualified when referring to the production data, and I qualified the backup file with the library name BACKUP. This would fail rather insidiously if BACKUP actually sat above the production library in the library list; you would be selecting all the records from the archive file and inserting them right back into it, effectively duplicating the data. Now there's a way to really chew up some disk space!
The Best Way: Deleting Using the Archive
We can steal a page from the earlier example to avoid this possible pitfall. First, we archive the data to be deleted, and then we use the WHERE ... IN syntax to delete any production records that now exist in the archive file.
exec SQL insert into BACKUP/ORDHDR
(select * from PRODFILES/ORDHDR
where OHSTS = 'C' and HLACT < (current date - 30 days));
exec SQL delete from PRODFILES/ORDHDR
where OHORD in (select OHORD from BACKUP/ORDHDR);
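By the way, the same delete can also be written with the WHERE EXISTS syntax and a correlated subquery; here's a minimal sketch that is functionally equivalent to the IN version above:
exec SQL delete from PRODFILES/ORDHDR H
where exists (select 1 from BACKUP/ORDHDR B
where B.OHORD = H.OHORD);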
Simple enough! First, copy the records that match the criteria from the production file to the backup file. Then, delete all records in the production ORDHDR file whose order numbers exist in the ORDHDR file in the BACKUP library. In these examples, I've included the library qualification for the production files, which, according to the code above, reside in a library named PRODFILES.
Anyway, we can now apply the same principles to the detail file:
exec SQL insert into BACKUP/ORDDTL
(select * from PRODFILES/ORDDTL
where ODORD in (select OHORD from BACKUP/ORDHDR));
exec SQL delete from PRODFILES/ORDDTL
where ODORD in (select OHORD from BACKUP/ORDHDR);
Both the insert and the delete statements for the order detail records follow the same pattern: process only those detail records whose order header exists in the backup file. Since the order header backup file stays static, this will ensure that the production files don't get out of sync. To make absolutely certain that nothing can break the process, you can allocate the backup file using an exclusive lock to prevent any unexpected updates.
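If you'd rather take that exclusive lock from within SQL rather than with the ALCOBJ command, DB2 for i offers the LOCK TABLE statement. This is just a minimal sketch; verify the lock duration under your own commitment control settings:
exec SQL lock table BACKUP/ORDHDR in exclusive mode;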
Leaving You with One Other Trick
I showed you how to insert data into one table from another table and then how to use a table to control the archival of related files. This is a great use of the insert technique, but others exist. Sometimes you just need a temporary file with a subset of the data in the larger file.
Let me show you how to create that temporary table. In fact, we'll create one right in QTEMP:
exec SQL create table QTEMP/ORDHDR like PRODFILES/ORDHDR;
Nothing to it! The CREATE TABLE ... LIKE syntax creates an exact duplicate of the existing table, down to the format level, but empty. Then you can populate the new table with data from the original, perhaps using the INSERT syntax we learned earlier in the article. You can use the new table anywhere the old table was used, including traditional RPG and COBOL programs.
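For example, to populate the new work file, an INSERT like the ones above does the job; this sketch reuses the closed-order criteria from earlier, though any WHERE clause would do:
exec SQL insert into QTEMP/ORDHDR
(select * from PRODFILES/ORDHDR
where OHSTS = 'C' and HLACT < (current date - 30 days));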
So, enjoy this technique; I think it's one of the most realistic uses of set-based processing that you'll find. Whereas I rarely find myself updating every price in a file by 10 percent, I often need to move or copy part of a file to another file, and the examples in this article show you an easy way to use SQL to do exactly that.