Updating one file from another in SQL has been messy in the past, but this technique makes it a lot easier.
One of the big advantages of SQL is the fact that you can add columns to a table without worrying about the underlying access; columns that aren't used are simply ignored in the SQL client programs. However, once you have added that column, the next trick is to populate the column correctly.
If it's a new attribute that must be maintained by the user, you may have to insert some default values. You might need to set the value based on information in other files (totals, let's say). Or sometimes you need a complex query just to create the data that you would then apply to the new field, which calls for careful, error-prone work, especially if you're only updating some of the records. Today, though, we'll take advantage of some newer capabilities to avoid most of the mess.
Embracing Change
Actually, I'm going to show you two different advanced features: partitioning and merging. Partitioning is one of those OLAP-type functions that has been around for some time, but the MERGE command is brand-new to the IBM i as of version 7.1. So what are we trying to do here? Let's try a thought exercise: let's say I've got a file in which I want to store error messages. It's pretty simple; periodically I'm going to spin through my item master file and perform some edits. Clearly, these are things that could be handled in either the maintenance program or even a trigger program, but let's assume that for whatever reason there are cases when files go bad. A specific example: a cost center can be terminated, so any items pointing to that cost center need to be changed. Should the maintenance program have handled it? Sure. Have I ever had a situation where a maintenance program didn't handle every foreign key? Why, yes…yes, I have. So enter the database audit program.
The program is designed to write to an error file created with a simple layout: item number, run time (the time the audit started), error message ID, and error message data. I'm a big fan of using IBM i message files to define error messages, and this is a fairly standard way of writing the data. (Note: sometimes I include the formatted message as well even though it's denormalized; having the formatted message in the file sometimes makes it easier for a human being just scanning the error log.)
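As a concrete sketch, that layout might look something like the following. Here SQLite (via Python) stands in for DB2 for i, and the message-data column name IEMSGDTA is an assumption on my part; only IEITEM, IERUNTIME, and IEMSGID appear in the queries later in the article.

```python
import sqlite3

# SQLite stand-in for the DB2 for i error file described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ITEMERRORS (
        IEITEM    TEXT,   -- item number
        IERUNTIME TEXT,   -- time the audit run started (a timestamp on DB2 for i)
        IEMSGID   TEXT,   -- message ID from an IBM i message file
        IEMSGDTA  TEXT    -- message replacement data (column name assumed)
    )
""")

# Confirm the layout we just created.
cols = [row[1] for row in conn.execute("PRAGMA table_info(ITEMERRORS)")]
print(cols)
```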
So What's the Problem?
Well, the problem comes when I realize I don't have a unique key. Why not? Well, because theoretically I could have multiple messages for the same item. Heck, I could have multiple instances of the same message ID for the same item, especially if I'm being judicious with my error messages. I might, for instance, have a generic field required message to which I pass the name and description of the field. That way, I don't have to add a new message every time I need a new required field. But the problem comes when I want to get to a specific error message (maybe from a subfile inquiry).
I could figure out a unique key. Message ID and message data along with the item number and run time would almost certainly be unique, but that would be a large index for not much gain. I could add another timestamp that records the time to the millisecond when the error is written, but that's still overkill, I think. Instead, what I can do is simply add a unique sequence number for this record. I can use ALTER TABLE or CHGPF and in seconds I have a wonderful new field. But now how to populate it? Well, first let's figure out how to calculate the value. The file starts out with data from two runs. The first run on 05/21 has only one error, while the second on 05/25 has four error messages:
Item Number   Audit Run Time        MsgID     Message Data
BND223G10     2015-05-21-15.30.12   RUN0010   Default W/C
BND223G10     2015-05-25-15.30.46   RUN0010   Default W/C
MIL001X01     2015-05-25-15.30.46   RUN0010   Default W/C
MIL001X01     2015-05-25-15.30.46   RUN0010   Primary UOM
SRV701A93     2015-05-25-15.30.46   RUN0010   Item Class
So you see that on 05/25 item MIL001X01 has two errors of the same kind. For two different fields, to be sure, but as noted above, unless I want to put the message data field into the key, I'm not going to be able to distinguish between the two. So I added the message sequence field, but the value for every record is zero; I still need to populate the field. To do so, I need a way to compute a running count within each group. That brings us to the first technique for today's article: the combination of ROW_NUMBER and the PARTITION BY clause.
Here's the syntax:
select IEITEM, IERUNTIME, IEMSGID, rrn(ITEMERRORS) ierrn,
row_number() over (partition by IEITEM, IERUNTIME
order by rrn(ITEMERRORS)) Seq from ITEMERRORS
order by IEITEM, IERUNTIME, IEMSGID, Seq
Simple enough, and what we end up with is this:
Item Number   Audit Run Time        MsgID     IERRN   Seq
BND223G10     2015-05-21-15.30.12   RUN0010   1       1
BND223G10     2015-05-25-15.30.46   RUN0010   2       1
MIL001X01     2015-05-25-15.30.46   RUN0010   3       1
MIL001X01     2015-05-25-15.30.46   RUN0010   4       2
SRV701A93     2015-05-25-15.30.46   RUN0010   5       1
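If you'd like to experiment without an IBM i handy, the result above can be reproduced in any database with window functions. Here's a minimal sketch using SQLite via Python; since SQLite has no rrn() function, the built-in rowid stands in for the relative record number, and the IEMSGDTA column name is my assumption.

```python
import sqlite3

# Rebuild the sample data from the article in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ITEMERRORS (
    IEITEM    TEXT,
    IERUNTIME TEXT,
    IEMSGID   TEXT,
    IEMSGDTA  TEXT   -- column name assumed
);
INSERT INTO ITEMERRORS VALUES
    ('BND223G10','2015-05-21-15.30.12','RUN0010','Default W/C'),
    ('BND223G10','2015-05-25-15.30.46','RUN0010','Default W/C'),
    ('MIL001X01','2015-05-25-15.30.46','RUN0010','Default W/C'),
    ('MIL001X01','2015-05-25-15.30.46','RUN0010','Primary UOM'),
    ('SRV701A93','2015-05-25-15.30.46','RUN0010','Item Class');
""")

# ROW_NUMBER restarts at 1 for each IEITEM/IERUNTIME partition,
# just as in the article's result table.
rows = conn.execute("""
    SELECT IEITEM, IERUNTIME, IEMSGID, rowid AS ierrn,
           ROW_NUMBER() OVER (PARTITION BY IEITEM, IERUNTIME
                              ORDER BY rowid) AS Seq
      FROM ITEMERRORS
     ORDER BY IEITEM, IERUNTIME, IEMSGID, Seq
""").fetchall()
for r in rows:
    print(r)
```

The MIL001X01 rows share an item and run time, so they fall into one partition and get Seq values 1 and 2; every other row is alone in its partition and gets Seq 1.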
What happened here? Well, we did two things: we asked for a row number, and we asked for it to be partitioned. The ROW_NUMBER function is simple: it causes SQL to return a sequential number. It's the over() clause that does the magic. If I didn't put anything in the parentheses, we'd just see a number rising from 1 to however many rows were in the file. But instead we see that the number resets with every change in the partition fields IEITEM and IERUNTIME. Effectively, we have a counter for the records in each item/runtime combination. Now, though, we have to get that value into the new IEERRSEQ field. In olden times, we did this through an update statement with an embedded subselect. It could be done, but it was very tedious, very non-intuitive, and very error-prone. Let me show you the new way!
merge into ITEMERRORS
using (
select IEITEM, IERUNTIME, IEMSGID, rrn(ITEMERRORS) ierrn,
row_number() over (partition by IEITEM, IERUNTIME
order by rrn(ITEMERRORS)) Seq from ITEMERRORS
) as ies on (rrn(ITEMERRORS) = ies.ierrn)
when matched then update set IEERRSEQ = ies.seq
That's it! You'll see our previous SELECT statement in the middle, wrapped by the MERGE statement. The MERGE has a lot of moving parts and this is just a simple introduction, but let me go through them. First, the MERGE INTO identifies the file that is going to be updated. Next is the USING clause, which defines the data that will be used to update the table specified on the MERGE. This data needs a name so I call it ies, for Item Error Sequence. After the name is the comparison that we use to link the data in the subselect to the data in the MERGE clause. It's the same syntax as a JOIN … ON; specify the fields in the source file and the fields that they match in the target file. I'm cheating here a little bit; because the source and target files are the same, rather than worry about fields, I just use the relative record number. While this isn't an option for most cases, it sure works nicely when it is an option.
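For comparison, here's a sketch of the "olden times" approach mentioned earlier: an UPDATE with an embedded subselect. It runs on SQLite via Python, which has neither MERGE nor rrn(), so rowid stands in for the relative record number and the IEMSGDTA column name is assumed. Even in this small case, you can see why the MERGE version reads more naturally.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ITEMERRORS (
    IEITEM    TEXT,
    IERUNTIME TEXT,
    IEMSGID   TEXT,
    IEMSGDTA  TEXT,               -- column name assumed
    IEERRSEQ  INTEGER DEFAULT 0   -- the newly added sequence column
);
INSERT INTO ITEMERRORS (IEITEM, IERUNTIME, IEMSGID, IEMSGDTA) VALUES
    ('BND223G10','2015-05-21-15.30.12','RUN0010','Default W/C'),
    ('BND223G10','2015-05-25-15.30.46','RUN0010','Default W/C'),
    ('MIL001X01','2015-05-25-15.30.46','RUN0010','Default W/C'),
    ('MIL001X01','2015-05-25-15.30.46','RUN0010','Primary UOM'),
    ('SRV701A93','2015-05-25-15.30.46','RUN0010','Item Class');
""")

# The pre-MERGE technique: a correlated subselect linked back to the
# target row by record number (rowid here, rrn() on DB2 for i).
conn.execute("""
    UPDATE ITEMERRORS
       SET IEERRSEQ = (
           SELECT Seq
             FROM (SELECT rowid AS ierrn,
                          ROW_NUMBER() OVER (PARTITION BY IEITEM, IERUNTIME
                                             ORDER BY rowid) AS Seq
                     FROM ITEMERRORS) AS ies
            WHERE ies.ierrn = ITEMERRORS.rowid
       )
""")

seqs = [r[0] for r in conn.execute(
    "SELECT IEERRSEQ FROM ITEMERRORS ORDER BY rowid")]
print(seqs)  # the MIL001X01 pair gets 1 and 2
```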
The last section is the WHEN MATCHED clause. Our case is very simple; we're updating all the records in the file based on data from all the records in the file, so every record is by definition a match. However, the MERGE statement is much more capable than that, with options not only for WHEN MATCHED but also for WHEN NOT MATCHED; we'll see much more about those clauses, as well as more about PARTITION BY, in future articles.