Journals are what allow for commitment control on the AS/400 platform. Many legacy applications that came over from the System/36 were not written to use journaling and have never been rewritten to take advantage of commitment control. Also, journaling takes disk resources and can impact system performance if an AS/400 is already running near I/O capacity, so a lot of shops will not allow journaling to be added to their physical files.
In the SQL world (which is also the Web world, as SQL is the most prevalent way of talking to databases from Web applications), commitment control is expected. Most Web application designers would balk at not having it. Well, guess what: Designing Web applications that talk to legacy nonjournaled physical files requires special consideration. Let's review commitment control and take a look at a technique to minimize exposure to potential problems.
A Review for the Uncommitted
For those who may not be familiar with commitment control, let me review. If you have a physical file on the AS/400 and associate it with a journal, you can design applications that use commitment control. Under commitment control, every insert, change, or deletion to a journaled file is recorded first in the journal and then in the physical file. To make the changes permanent, the programmer calls commit, which causes the changes to the physical file to be considered complete. If, during processing, the programmer decides to undo his changes to the physical file(s), he can issue a rollback. The rollback is like a big AS/400 undo button that backs out all changes to the physical files since the last commit or rollback. In addition, if an application running under commitment control abends, any changes that were not explicitly committed are removed automatically from the physical files. Commitment control is like quality assurance for your data: a properly designed system with commitment control will never have half-entered invoices or dangling parent-child relationships because of a program or system failure.
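From SQL, the same semantics surface directly in the client API. Here is a minimal JDBC sketch, assuming the files are journaled and reachable through the IBM Toolbox for Java driver; the column names and connection details are placeholders, not a real schema. Turning off autocommit opens a transaction, commit() makes both writes permanent together, and rollback() is the undo button.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CommitDemo {
    public static void main(String[] args) throws SQLException {
        // System name, library, user, and password are placeholders.
        Connection conn = DriverManager.getConnection(
                "jdbc:as400://myas400;libraries=ORDMST", "user", "password");
        conn.setAutoCommit(false); // run under commitment control
        try (PreparedStatement hdr = conn.prepareStatement(
                 "INSERT INTO ORDHDR (ORDNO, CUSTNO) VALUES (?, ?)");
             PreparedStatement dtl = conn.prepareStatement(
                 "INSERT INTO ORDDTL (ORDNO, ITEM, QTY) VALUES (?, ?, ?)")) {
            hdr.setInt(1, 1001);
            hdr.setInt(2, 42);
            hdr.executeUpdate();
            dtl.setInt(1, 1001);
            dtl.setString(2, "WIDGET");
            dtl.setInt(3, 5);
            dtl.executeUpdate();
            conn.commit();   // both writes become permanent together
        } catch (SQLException e) {
            conn.rollback(); // the big undo button: back out both writes
            throw e;
        } finally {
            conn.close();
        }
    }
}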
Most RPG and COBOL AS/400 legacy applications were written with the idea that the code could not fail. This belief is the result of the AS/400 being such a robust platform, of those systems being hosted on a single, integrated platform, and of the way multiple database writes from a terminal emulation session are usually batched within a single program process. The terminal data was probably good, because the programmer wrote code to ensure that any parts added to orders existed in the parts table, that quantities were positive numbers, and so on. The programmer also had a lot of control over the order in which operations were performed: he could ensure that the user performed step A, then B, and finally C.
It's a Web Wide World
In the Web world, you do not have control over the order of operations that the client initiates. Also, clients can bookmark pages and jump past steps you might want them to perform. In addition, you are now running from multiple platforms and adding points of failure to your system. What happens if the TCP/IP link goes down during a transaction? What if the Web server crashes during order posting? These were not questions you had to answer when writing legacy applications, but they are things that you have to deal with under the Web and client/server models. What do you do?
Designing for No Commitment Control
For those who can't have commitment control, all is not lost: you can design the interfaces to your legacy system to minimize the impact of running without it. It takes a little forethought, but the process is not that hard. I just interfaced with a system where commitment control was not an option. In this article, I will review the design, discuss the failure points, and then show you how to minimize them.
My victim is a legacy warehouse/distribution application that has over 1,000 physical and logical files and 2 million lines of RPG III code. I was tasked to write a Web-based order entry program. I'm concerned with the following files: order header, order detail, and warehouse item master. There is batch-oriented code for updating the accounting and other tables, but some quantity information must be adjusted at the time an order is placed into the system. The legacy terminal application allows a user to define an order, add items to the order, and then post the order. During posting, the order header is written, and then each item on the order is posted to the order detail table. As each detail item is posted, certain quantity values in the warehouse item master table are debited and credited.
When designing Web interfaces, you face a lot of problems. Web applications are usually multiplatform. The Web app may be running on a Linux box, Windows 2000, or AIX while the data is on the AS/400. Even if the Web app is on the AS/400, it is not immune to failure. The Web application may be running in Java or using technology such as Active Server Pages (ASP) and communicating via SQL or record-level access. There can be inadvertent bugs in the application code that cause the app to crash, unexpected record-locking issues that cause it to halt, or communication link failures that cause half an order to be written. All of this is bad news. Remember that these Web apps are new and not time-tested like the legacy system. You are putting more moving parts between your data and your user, and more moving parts create more points of failure. Alas, if you had commitment control, these dangers would be slightly less of an issue. But that is a perfect world, and this is not.
The First Design of the Web App
The first design of the Web order entry system that interfaces with the legacy system stores the parts a user is selecting in a physical file on the AS/400. Because Web applications have no inherent state, the items a user selects must be written to a physical file. As the user selects items and specifies quantities, the items are written to the table under that user's ID, but no quantity adjustments are made (just like the legacy system). When the user is finally finished playing with his order, whether today or next week, he presses a button to initiate order processing. The Web code validates each part price and ensures that the part is still available. It then writes the order header record to the target physical file. The code proceeds to write each item detail to its respective file and adjusts the warehouse item master quantities via SQL statements. The Web app then deletes all of the parts from the physical file used to keep order state, as those parts are now part of an order. If this code completes, I have a good order posted to the host system and am ready for the user to do another.
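To make the exposure concrete, here is a JDBC sketch of that first posting pass. The file and column names (WEBSTATE, ORDHDR, ORDDTL, ITMMST, and so on) are placeholders for the real files, and autocommit is on, so every statement is permanent the instant it executes.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the first design. Each statement commits as it runs, so a
// failure between any two steps leaves a half-posted order behind.
public class NaivePost {
    static void postOrder(Connection conn, String userId, int orderNo)
            throws SQLException {
        // Step 1: write the order header straight to the production file.
        try (PreparedStatement hdr = conn.prepareStatement(
                "INSERT INTO ORDMST.ORDHDR (ORDNO, WEBUSER) VALUES (?, ?)")) {
            hdr.setInt(1, orderNo);
            hdr.setString(2, userId);
            hdr.executeUpdate();
        }
        // Step 2: turn each state record into an order detail record and
        // debit the warehouse item master. A dropped TCP/IP link inside
        // this loop strands a half-written order and half-adjusted stock.
        try (PreparedStatement sel = conn.prepareStatement(
                "SELECT ITEM, QTY FROM ORDMST.WEBSTATE WHERE WEBUSER = ?")) {
            sel.setString(1, userId);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    writeDetail(conn, orderNo, rs.getString("ITEM"), rs.getInt("QTY"));
                    debitMaster(conn, rs.getString("ITEM"), rs.getInt("QTY"));
                }
            }
        }
        // Step 3: clear the user's state records. If the app dies before
        // this runs, the user can post the same order a second time.
        try (PreparedStatement del = conn.prepareStatement(
                "DELETE FROM ORDMST.WEBSTATE WHERE WEBUSER = ?")) {
            del.setString(1, userId);
            del.executeUpdate();
        }
    }

    static void writeDetail(Connection conn, int orderNo, String item, int qty)
            throws SQLException {
        try (PreparedStatement dtl = conn.prepareStatement(
                "INSERT INTO ORDMST.ORDDTL (ORDNO, ITEM, QTY) VALUES (?, ?, ?)")) {
            dtl.setInt(1, orderNo);
            dtl.setString(2, item);
            dtl.setInt(3, qty);
            dtl.executeUpdate();
        }
    }

    static void debitMaster(Connection conn, String item, int qty)
            throws SQLException {
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE ORDMST.ITMMST SET QTYAVAIL = QTYAVAIL - ? WHERE ITEM = ?")) {
            upd.setInt(1, qty);
            upd.setString(2, item);
            upd.executeUpdate();
        }
    }
}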
However, if there is a failure at any time during this critical processing, there is no way to undo the order. What if the router goes down during the writing of the detail records? What if the server runs out of memory and halts? What if the order is posted but the state table is not cleaned up and the user processes the order again? I have no control in this situation and am relying on the byte gods to ensure that the data gets where it is supposed to go. This is bad design. This is the critical area of processing that has multiple modes of failure waiting to swoop down and wreck my evening.
To protect yourself in your application design, you need to identify these critical failure portions of your Web application. Then you have to look at how to minimize the communications required to perform the operations. What you want to do is have one point of call to process the order so that, once that call is made, you are sure that either the order is posted completely or nothing happens at all.
Minimizing Failure
Here is an outline of the approach I used to minimize failure within the posting process. The biggest point of failure is the writing of the order records to the physical files and the updating of the quantities. The approach I took was to write to empty copies of the physical files in QTEMP and then call a stored procedure for final processing. I made these QTEMP targets using the Copy File (CPYF) command. It doesn't matter whether you are using Java Database Connectivity (JDBC), ODBC, or ActiveX Data Objects (ADO): You can always execute an AS/400 command as if it were a stored procedure. By executing the following command, I cause an empty copy of the order header file to be written to QTEMP:
CPYF FROMFILE(ORDMST/ORDHDR) +
TOFILE(QTEMP/ORDHDR) +
CRTFILE(*YES)
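If you are coming in over JDBC, one way to issue that command is through the QCMDEXC system program, which can be called like a stored procedure. The sketch below adds a CLRPFM step as a hedge of my own: CPYF with CRTFILE(*YES) copies any records already in the from-file along with its definition, so clearing the QTEMP member guarantees the copy starts out empty.

import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class QtempSetup {
    // QCMDEXC takes the command string plus its length as DECIMAL(15,5).
    static void execCommand(Connection conn, String cmd) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("CALL QSYS.QCMDEXC(?, ?)")) {
            cs.setString(1, cmd);
            cs.setBigDecimal(2, new BigDecimal(cmd.length()));
            cs.execute();
        }
    }

    static void makeQtempCopies(Connection conn) throws SQLException {
        // Create the QTEMP copy of the order header file...
        execCommand(conn, "CPYF FROMFILE(ORDMST/ORDHDR) TOFILE(QTEMP/ORDHDR) "
                + "CRTFILE(*YES)");
        // ...and clear it so the copy holds only this order's records.
        execCommand(conn, "CLRPFM FILE(QTEMP/ORDHDR)");
    }
}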
The CPYF command would be followed by more commands that create empty copies of the order detail file and the other targets of insert operations in QTEMP. After making the empty copies, the Web application posts the order header and detail records to the QTEMP copies using the same INSERT statements it would have used to write to the target physical files. Finally, the Web application calls a stored procedure on the AS/400. This stored procedure could be written in CL, RPG, or COBOL; in my application, it is a CL program that in turn calls an RPG program.

The RPG program updates the quantities on hand in the warehouse item master by reading the quantities ordered from the files in QTEMP. Because the stored procedure runs in the same job as the Web program, it can see all of the QTEMP files that the caller created. The RPG program then calls the legacy program to retrieve an order number and updates the QTEMP records with that number. Control returns to the CL program, which uses CPYF to copy the data from the QTEMP physical files into the target physical files in the production system. Finally, the CL program calls another RPG program that removes the state records from the state table so that the user starts with a clean, new order. When the procedure finishes, it returns the issued order number to the calling program.
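From the Web application's point of view, the entire critical section collapses into one call. The sketch below assumes the CL program has been registered as an SQL stored procedure named POSTORD with a single OUT parameter for the order number; both names are hypothetical stand-ins for your own interface. Note that the CPYF commands, the QTEMP INSERTs, and this call must all travel over the same connection, because each connection is a separate job with its own QTEMP library.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

public class PostOrder {
    // The one critical call: the posting work happens entirely on the
    // AS/400, so a Web-side failure after execute() returns no longer
    // leaves partial data behind.
    static int callPostingProcedure(Connection conn) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("CALL ORDMST.POSTORD(?)")) {
            cs.registerOutParameter(1, Types.INTEGER); // issued order number
            cs.execute();
            return cs.getInt(1);
        }
    }
}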
The beauty of this approach is that the only point of critical failure now resides on the AS/400. Using this method, it doesn't matter if my application fails during the writing of the temporary files: the AS/400 will automatically delete them because they are in QTEMP. Suppose the application writes all of the temporary files and calls the stored procedure but fails before the procedure completes and returns control to the calling program. Big whoops! The procedure will complete and then attempt to return data to the calling program. The data return will fail, but the results of the procedure are still complete.
Living without commitment control is like living without coffee or Jolt cola. It can be done; it just takes some adjustments to your mode of thinking.