
Architecting for Change--The Message-Based Server


If you're connected to the Internet, you're probably thinking about a Web-based application. You can develop a variety of applications, ranging from simple static "brochure-ware" to complex applications integrated with your existing legacy systems. I've written in the past about the various application architectures you can employ to design the more sophisticated applications, but this article will focus on something a little more low-level: the middleware used to communicate between the client application and the host.

This article is a guide to designing Web applications using a message-based architecture. I'll introduce the concepts of clients and servers, as well as the various communications methods, and then go into detail on the merits of message-based processing.

Clients and Servers

If you need distributed applications, then regardless of the type of application, there are two fundamental pieces to the puzzle: the client and the server. Where these two entities reside is really not important; what's far more important is how they communicate with one another.

A client is a program that does one of two things: It requests data, or it requests that an action be performed, usually on the database. While there are many variations on this theme, they all fall into these basic categories. I tend to use the following broad categories:

1. QUERY: Return a set of data based on input parameters
2. CRUD: Create, Read, Update, or Delete records in the database
3. REPORT: Initiate a batch process whose results will be returned as a document
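To make the three categories concrete, here is what a request in each might carry. This is a hypothetical sketch in Python; the field names and values are my own, not from any particular product:

```python
# Illustrative request payloads for the three broad categories.
# All field names and request values here are hypothetical.

query_request = {
    "category": "QUERY",    # return a set of data based on input parameters
    "parameters": {"dept": "SALES", "min_age": 30},
}

crud_request = {
    "category": "CRUD",     # create, read, update, or delete a record
    "operation": "UPDATE",
    "record": {"empid": "EMP00042", "empnam": "Joe Pluta"},
}

report_request = {
    "category": "REPORT",   # kick off a batch process; results return as a document
    "report": "PAYROLL-SUMMARY",
    "deliver_to": "spool",
}
```

Notice that the client expresses *what* it wants, not *how* the host should get it; that distinction is the heart of everything that follows.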

You can write a wide variety of applications using just these basic requests. If the architecture is properly designed, the clients can be green-screen applications called up from a 5250 display, thick clients running on locally attached workstations, or browser-based applications using servlets and/or Java Server Pages (JSPs) running anywhere on the Internet.

The same flexibility applies on the server side: the more independent your clients are, the more independent your servers can be. Servers can be as specific as an ODBC interface or as flexible as an XML document processor. The ODBC interface is easy to implement and usually performs well, but you pay the price of supporting only ODBC requests. An XML processor, on the other hand, can support an almost limitless variety of requests, but the price is quite a bit of overhead and up-front design. This article describes an approach that strikes a balance between the power of XML and the ease of use of SQL.


Many communication techniques exist today--from screen-scraping the 5250 display, to ODBC, to HTML, to XML. You can call programs or invoke stored procedures. Each technique has advantages and disadvantages.

  • 5250 screen scrapers use a 5250 emulator to read the 5250 display stream and to enter data back as if the user were keying it at the display. This technique is most useful with legacy systems, because it usually requires no change to the existing programs. The drawback is that it requires a 5250 display session and, in most cases, is penalized by the interactive tax of the iSeries.
  • ODBC is the communications technology underlying embedded SQL and JDBC. In essence, the syntax of an SQL statement is compiled into a program (or generated on the fly) and then sent to an ODBC interface, which executes it. The primary benefit of this technique is the standardized nature of the SQL syntax and the fact that most database providers (including IBM) are working hard to increase the performance of their ODBC interfaces. The biggest weakness of the pure ODBC approach is that the syntax is tied directly to the physical database layout, and thus not only are your clients bound to your server's database, but more importantly, your database layout is captive to your clients.
  • HTML was designed solely to communicate with human users. It is focused on rendering, not on content, and so is unsuitable for use as a peer-to-peer communications medium.
  • XML, on the other hand, was designed from the ground up to be used as a content-aware data communication format. There is a hefty learning curve, a large start-up cost in terms of defining your messages, and significant overhead in processing time. Despite that, in a resource-rich environment, XML is probably the most robust and flexible of communications techniques.
  • Program calls and stored procedures are similar in that they are designed to accept a set of parameters and return a result. In practice, they separate the client and the database almost as well as a complete message-based architecture, with less of the up-front overhead. Their only real disadvantage is that they are less flexible than a message-based system when the actual interface changes.

Message-Based Processing

This article focuses on a specific communication method: message-based processing. Message-based processing has been around for a long time, and it has some specific shortcomings that may make you wonder why I recommend it. Part of the reason is that, since it has been around so long, all of the issues have been worked out in one way or another. But more importantly, a specific feature that is exclusive to message-based processing makes it uniquely suited for the fast-paced world of Web-enabled software: It can support older and newer clients simultaneously.

The Impact of Change

When I was designing architectures for System Software Associates, a term that was constantly used whenever enhancements or fixes were required was "impact analysis." An impact analysis determined what programs had to be changed for a given modification. Modifications to files were always high-impact, and central master files were so widespread in their use that enhancements were sometimes designed specifically to avoid changing a master file. Anyone familiar with how we implemented multiple facility processing will know what I mean--rather than change a key field in several files from warehouse to facility, we instead added a cross-reference file that we used for processing everything. It caused a lot of unnecessary code and added processing overhead, but it avoided a change to several master files. These are the kinds of decisions you must make when your systems are insufficiently insulated from change. And remember, this was in an environment where we had total control over all the programs.

In the brave new world of distributed processing, things are even more difficult. The client programs usually run on workstations, making it difficult to keep them up-to-date when the interface changes. If you have a large PC user base, you probably already know how difficult it is to keep the PCs up-to-date, even for something as critical as virus protection. This is doubly the case for applications, because users often figure that if it works, why fix it? They may not know that subtle changes to the database have caused their version to be as dangerous as any virus. If your programs are run on an intranet, you may have some degree of control over them--in fact, by using a mapped drive you can actually centrally locate your applications--but unless you have those procedures in place, you're running the risk of having programs disrupt your database integrity whenever you change your business logic. If you're lucky, the programs will fail when the interface changes (if you're not, they'll run, but they'll give false results or even corrupt your database). If your application is run remotely over the Internet, the problem grows exponentially.

Insulation from Change

Insulation from change is the primary characteristic of a message-based architecture. Any other form of direct access, either through program calls or direct database access, exposes clients to changes in the host software. This is especially true with data-centric techniques such as ODBC. For example, if you change the format of your date fields, every program that accesses those fields will have to change.

This is not the case in a message-based architecture. Instead, each interaction between a client and a server is cataloged as a request and a corresponding response. The layout of the request and response may or may not have anything to do with the actual physical layout of the data. (It should be noted that for ease of setup, the first generation of messages may at least be very similar to the database files, but that's not necessarily a bad thing, as I'll show later.)

Now I'll introduce my small sample application. The objective is to retrieve an employee's name, age, and number of years worked. The database is straightforward; I've depicted it in the table below.

Employee ID
Employee Name
Date of Birth    8S0 (CYMD)
Hire Date        8S0 (CYMD)

Time to see how it works in a client/server environment.

How It Works

In this section, I'll take you through the same exercise using SQL and using a message-based approach. First, I'll solve the original problem. Then, I'll respond to some different business requirements. As I continue on, you can compare the ease with which each approach allows you to keep up with changing business demands.

Original SQL

Well, getting the name is simple enough. But as soon as I began work on this example, I found that the syntax for extracting the age from a CYMD field was a little bit complex, as I've shown in Figure 1.

select year(curdate() -
       date(substr(char(empdob),1,4) || '-' ||
            substr(char(empdob),5,2) || '-' ||
            substr(char(empdob),7,2)))
from empmst

Figure 1: Use this SQL to extract the employee's age.

The SQL literate among you will argue that I should have used DATE fields. That's not an option with legacy systems, but for the sake of argument, I'm going to change my database. EMPMS2 will contain the same data, but the two date fields will be stored as DATE fields. Please note, though, that now my database layout is being dictated by my clients. That is, I have to make database design decisions based on how they affect my client programs, rather than on efficiency, cost, or other business reasons. This is what we're trying to avoid, and it's one of the problems with a rigid interface such as ODBC.

Anyway, I've changed my database, so now my client program can proceed with a simple extract. Using RPG and embedded SQL, the syntax would be something like what is shown in Figure 2.

select empnam, year(curdate()-empdob), year(curdate()-emphir)
into :name, :age, :yearsonjob
from empms2 where empid = :id

Figure 2: This SQL extracts the required information from our new database.

A very important point is that the names used in the request are specifically those in the database. SQL requires that the file and field names match the ones in the database. Since this code is in the client (and is in fact in every client), anytime the database changes, all clients must change. While this shouldn't happen often, when it does, it can cause real headaches.

Original Message-Based Process

How does this compare with the work required for a message-based request? Well, to start with, I have to define a request, a response, and a server. My request and response will be quite simple, as shown in the table below.

Request:  Employee ID
Response: Employee Name, Employee Age, Years on Job

I create two data structures, one for the request and one for the response. I populate the request data structure and pass it to the server, and I receive back the data structure containing the response. This would work just fine, but if this were the limit of the design, I would need one server program for every request, and every client would have to know the name of every server.

Instead, I'm going to introduce the concept of a request dispatcher. This is the central idea of a message-based architecture: The data structures that hold the request and response are actually part of a larger data structure, one that can be used to handle any request. The basics are shown in the following table.

  • Client ID
  • Server ID
  • Request Code
  • Return Code
  • Message Data

The client ID is assigned to the client when it starts up. The server ID tells the dispatcher which server to call, while the request code identifies the contents of the message data. For example, a request of '01' might retrieve the employee data I detailed above, while a request of '02' might update the data. The return code identifies at a high level whether the request was successful or not. This is a bidirectional parameter: The message data contains the request when sent to the server, and it contains the response when returned to the client.
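The envelope just described can be sketched as a simple structure. This is an illustrative Python rendering of the same idea (the field names follow the list above; the sizes and sample values are my own, and the article's actual implementation uses fixed-length RPG character fields):

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """One message: routing fields plus an opaque payload.

    The message data field is bidirectional: it carries the request
    on the way to the server and the response on the way back.
    """
    client_id: str     # assigned to the client at startup
    server_id: str     # tells the dispatcher which server to call
    request_code: str  # identifies the contents of the message data
    return_code: str   # high-level success/failure indicator
    message_data: str  # fixed-length buffer holding request or response

# Hypothetical request '01' (retrieve employee data) for server "EMP":
req = Envelope(client_id="C001", server_id="EMP", request_code="01",
               return_code="", message_data="EMP00042".ljust(256))
```

The key point is that nothing in the envelope names a file or a field; the payload's meaning is defined entirely by the server ID and request code.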

How the request gets to the server and the response gets back is irrelevant at this point. To keep the focus on the design, I'm going to use a simple dispatcher program: The client program calls the dispatcher, which calls the appropriate server program. You may notice several problems with this approach, primarily the fact that it limits the size of the data. I'm trying to keep the scope of the topic within a single article--a more complete design supports an arbitrary number of message segments in either direction, each with its own type. This is relatively easy to accomplish using a mechanism such as data queues, but that's too much detail for this article. I'll leave that portion as an exercise for the reader. Something even more interesting is that a request can be routed to another machine--on an entirely different platform, if necessary. But again, that's a different story for a different day.
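A dispatcher of the simple sort just described can be sketched in a few lines. This is a minimal, hypothetical Python sketch, not the article's RPG implementation; the handler, its routing key, and the stand-in database are all invented for illustration:

```python
# A minimal dispatcher: route each envelope (here just a dict) to the
# server registered under its server ID. Handlers fill in message_data
# and return_code in place, much like the RPG parameter list.

def employee_server(env):
    """Hypothetical handler for server ID 'EMP', request code '01'."""
    emp_id = env["message_data"].strip()
    db = {"EMP00042": "Joe Pluta"}        # stand-in for the EMPMS2 file
    if emp_id in db:
        env["message_data"] = db[emp_id]
        env["return_code"] = "00"         # success
    else:
        env["return_code"] = "01"         # not found

SERVERS = {"EMP": employee_server}        # registered once, used forever

def dispatch(env):
    handler = SERVERS.get(env["server_id"])
    if handler is None:
        env["return_code"] = "99"         # unknown server
    else:
        handler(env)
    return env

env = dispatch({"client_id": "C001", "server_id": "EMP",
                "request_code": "01", "return_code": "",
                "message_data": "EMP00042"})
# env["return_code"] is now "00" and env["message_data"] holds the name
```

The routing table is the only thing the dispatcher knows; adding a new server is one registration, with no change to existing clients.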

Instead, it's time to write the server program. I know, I know, the SQL version is already up and working and installed on 20 PCs by now. But bear with me. The program is very simple, as shown in Figure 3.

FEMPMS2    IF   E           K DISK      
D Request         DS           256       
D   EmpID                       10     
D Response        DS           256       
D   EmpName                     50        
D   EmpAge                       3  0     
D   EmpYrsOnJob                  2  0      
D XIRequestID     S              2          
D XIReturnCode    S              2          
D XIMessage       S            256           
D Today           S               D   INZ(*SYS)   
C     *ENTRY        PLIST                         
C                   PARM                    XIRequestID     
C                   PARM                    XIReturnCode   
C                   PARM                    XIMessage      
C                   eval      Request = XIMessage
C     EmpID         CHAIN     EMPMS2
C                   if        not %found(EMPMS2)     
C                   eval      XIReturnCode = '01'
C                   else                              
C                   eval      EmpName = EMPNAM
C     Today         SUBDUR    EMPDOB        EmpAge:*Y    
C     Today         SUBDUR    EMPHIR        EmpYrsOnJob:*Y
C                   eval      XIMessage = Response
C                   eval      XIReturnCode = '00'
C                   endif              
C                   eval      *INLR = *ON       

Figure 3: This is the server program for the employee information request.

I also have to write the dispatcher, but that's a one-time cost, and it's even simpler than the server program. Even as dispatching gets more complex, it's important to remember that the dispatcher is a one-time cost: Write it once, and it works for every request. The servers are where the real work is done, and this is where the benefits of message-based programming begin to become apparent.

Business Scenario 1: A Calculation Changes

This is a simple change. The company uses the years on the job to determine certain benefits, and it's been decided that an employee should get credit for a full year after being on board six months. For example, my hire date is 10/31/2000, so my calculated years on the job should be three, rather than the two that the normal calculation returns. For the SQL, the change is relatively simple (although it took me a little while to find it), and is shown in Figure 4.

select empnam, year(curdate()-empdob),
       year((curdate() + 6 months)-emphir)
into :name, :age, :yearsonjob
from empms2 where empid = :id

Figure 4: This SQL is required to implement the new calculation.

Now, I have to change the calculation in every client program that calculates the number of years on the job.

C                   SUBDUR    6:*M          EMPHIR
C     Today         SUBDUR    EMPHIR        EmpYrsOnJob:*Y  

Figure 5: Here's the modification required to implement the same change in a message-based architecture.

What's the corresponding change in the message-based approach? Well, we have to change the server program. I added one line, as shown in Figure 5. Now to the clients. I have to change...nothing! Not a single client changes, because the code is localized in the server! This is probably the most important benefit of a message-based approach, although by no means the only one.
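The six-month rule itself is easy to verify outside RPG. Here is a minimal Python sketch of the adjusted calculation, mirroring the SQL's year((curdate() + 6 months) - emphir); the function name is mine, and day-of-month overflow when shifting the date is not handled:

```python
from datetime import date

def years_on_job(hired, today, credit_months=6):
    """Full years of service, crediting a full year once the employee
    is past the credit threshold (six months by default)."""
    # Shift 'today' forward by credit_months, as the SQL version does.
    # NOTE: a sketch -- day-of-month overflow (e.g. Jan 31 + 1 month)
    # would raise ValueError here.
    month = today.month + credit_months
    shifted = today.replace(year=today.year + (month - 1) // 12,
                            month=(month - 1) % 12 + 1)
    # Complete years between hire date and the shifted date.
    years = shifted.year - hired.year
    if (shifted.month, shifted.day) < (hired.month, hired.day):
        years -= 1
    return years

# The article's example: hired 10/31/2000, checked in May 2003.
# Plain calculation gives 2 years; with the credit it gives 3.
```

Crucially, in the message-based design this function lives in one place, the server, which is exactly why no client had to change.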

Business Scenario 2: A File Format Changes

The file must now be sorted by last name. Originally, the name field was a single field. Now, however, I need to separate the data out into individual fields for first name, last name, and middle initial. EMPNAM now becomes EMPFNM, EMPLNM, and EMPINI. See Figure 6.
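Following the pattern of Scenario 1, the fix again lives entirely in the server: it reads the new split fields and rebuilds the single name field the response message has always carried, so once more no client changes. Here is a hedged Python sketch of that mapping; the record layout and the formatting rules are illustrative:

```python
# Server-side mapping after the file change: the new split fields
# (EMPFNM, EMPLNM, EMPINI, as named in the article) are joined back
# into the single name field the response has always carried.

def build_emp_name(record):
    """Rebuild the legacy single-field name from the split fields."""
    initial = record.get("EMPINI", "").strip()
    middle = f" {initial}." if initial else ""
    return f'{record["EMPFNM"].strip()}{middle} {record["EMPLNM"].strip()}'

record = {"EMPFNM": "Joe", "EMPINI": "", "EMPLNM": "Pluta"}
build_emp_name(record)   # clients still receive one name field
```

A physical change that would have rippled through every SQL client is absorbed by a few lines in one server program.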

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.
