
Give Your Output Queues a Trigger Finger


All of our lives revolve around data. In the past, what you could do with data on the AS/400 was limited. You could gather it via a green-screen or a handheld scanner. Then, after the data was entered, programs or queries manipulated and reported the data in a way that told you what was happening with your business. The data was reported through screens or printed reports. Today, the rules are changing. New technologies have made it easier for the AS/400 programmer to retrieve and report data. The Web, for instance, is a great place to gather and report data. Email is another way to distribute data that is quickly catching on.

The only problem that you, as an AS/400 programmer, face is getting your data to the Web, into an email, or distributed to multiple users through output queues. To do so, you usually need to either rewrite existing applications or perform these tasks manually. We already have reporting programs, so why not use them to distribute these spool files in another way? This can be done with a concept that is new to some yet forgotten by others.

The Queue Connection

For those who are unfamiliar, I’d like to explain what queues are before I get into the main topic of this article. A queue is an object, abstract in the general sense, that stores and releases data, most commonly on a first in, first out (FIFO) basis. Queuing means placing items in a line to be processed on a first-come, first-served basis. A queue is like the line in which you stand waiting to buy a movie ticket: The first person in line is the first person to get a ticket. A queue on a computer works the same way. Data is made available in the order it is received.

The AS/400 contains many different types of queues. There are output queues, job queues, data queues, and user queues, just to name a few. You may be familiar with output queues, where your reports sit while waiting to print. To the casual AS/400 user, data queues are not as popular as output queues, but, to many programmers, they are very important tools. Data queues are often used to pass data between two sources that otherwise couldn’t communicate directly with each other. For example, PCs and AS/400s use data queues to communicate in such applications as barcode scanners and data replication tools.


The AS/400 can use data queues for many different purposes. In this article, I will show you how to associate a data queue with an output queue so that, when reports become available on an output queue, data is written to a data queue to alert you of the report’s existence. There is enough data provided in the data queue to do almost anything you want with the report.

Making the Link

A data queue can be associated with an output queue by using the Create Output Queue (CRTOUTQ) or Change Output Queue (CHGOUTQ) command to specify a value for the Data queue (DTAQ) parameter. Once a data queue is associated with an output queue and a spool file reaches ready (RDY) status, an entry is placed on the associated data queue that contains information identifying the spool file. This data can then be retrieved from the data queue and used to do pretty much anything you can imagine.

The first thing you must do is create the data queue. This is done by using the Create Data Queue (CRTDTAQ) command to specify the name, the library in which the data queue resides, and the maximum length of each entry. You can also specify the sequence (SEQ) parameter, which controls whether entries are processed first in, first out (FIFO) or last in, first out (LIFO). The defaults for the remaining parameters suffice. For the process I am using, I will stick with the default of FIFO.
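For example, a data queue sized to match the 128-byte receive buffer used later in Figure 2 could be created like this (the library and object names are illustrative):

CRTDTAQ DTAQ(MYLIB/DTAQ1) MAXLEN(128) SEQ(*FIFO)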

The next step is associating a data queue with an output queue. This is the easy part. One command will do the trick. For example, let’s say I have an output queue named OUTQ1 that I want to associate with the data queue DTAQ1. I would use the following command:

CHGOUTQ OUTQ(lib/OUTQ1) DTAQ(lib/DTAQ1)

If I were creating a new output queue and wanted to specify a data queue, I would use the following command:

CRTOUTQ OUTQ(lib/OUTQ1) ... DTAQ(lib/DTAQ1)

Once a data queue is associated with an output queue, an entry is added to DTAQ1 when a spool file reaches ready status on output queue OUTQ1. How does a spool file reach ready status? This is a good question. A spool file reaches ready status in the following situations:

• When the job creating a spool file completes, normally or abnormally, or when the spool file is opened and the schedule parameter value is *IMMED

• When the spool file is released from the HOLD or SAVE status

• When a spool file is moved from one output queue to another

• When a writer is ended immediately while printing a spool file (the status will go from WRT to RDY)

It is important to understand each case so that, when your data queue gets an entry, you’ll know why. The OS/400 Printer Device Programming V4R4 manual explains these cases in much more detail. I suggest that you study the section on associating data queues with output queues before implementing this process, but, for the process described here, you need be concerned only with the first case.


Data is placed onto the data queue in a specific format so that it can be parsed easily by using a data structure. This data contains all of the information needed to uniquely identify a spool file: the spool file name, the spool file number, and the name, user, and number of the job that created it. Figure 1 shows the layout and describes each section of the data that is placed on a data queue. More information on this data layout can be found in the OS/400 Printer Device Programming V4R4 manual as well.
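If you program in RPG IV, a convenient way to parse an entry is to define a data structure over this layout. Here is a minimal sketch (the subfield names are my own; the positions follow Figure 1):

D QEntry          DS           128
D  QFunc                  1     10
D  QRcdType              11     12
D  QJobName              13     22
D  QJobUser              23     32
D  QJobNbr               33     38
D  QSplfName             39     48
D  QSplfNbr              49     52B 0
D  QOutQName             53     62
D  QOutQLib              63     72

After each receive, you could move the returned data into QEntry (or simply define the data structure itself as the data parameter on the QRCVDTAQ call) and reference the subfields directly.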

As I said earlier, getting the data to the data queue is the easy part. Retrieving the data from the data queue is a little more challenging. This involves using the Receive Data Queue (QRCVDTAQ) API. If you are not familiar with using APIs, don’t worry. This API is actually one of the easier ones to use.

Receiving Data from the Data Queue

The Receive Data Queue (QRCVDTAQ) API includes a set of required parameters and two sets of optional parameters. For this example, I will discuss only the required parameters. If you wish, you can go back and include the optional parameters, which, among other things, allow you to retrieve more in-depth information about the job that placed the spool file on the associated output queue. More information on this API can be found in the OS/400 Object APIs V4R3 manual.

The first parameter of the QRCVDTAQ API is the data queue name, a 10-character input field. The second parameter is another 10-character field that contains the library in which the data queue resides. The third parameter is a packed field of five digits with zero decimals that returns the length of the data received on each call. The fourth parameter is a character field that can vary in length; it should be at least as big as the maximum entry length specified on the Create Data Queue (CRTDTAQ) command. The last parameter is an input field, packed with five digits and zero decimals, that specifies how long to wait for an entry to arrive. A value greater than zero is the number of seconds to wait for an entry; a value of zero tells the API to return immediately; and a value less than zero tells the API to wait indefinitely, until an entry arrives or the job ends. The last is the value I usually use because each entry is processed as soon as it is placed on the data queue, and no CPU cycles are wasted polling an empty queue. See the OS/400 Object APIs V4R3 manual for more in-depth information about any of these parameters.

An example of how to call the QRCVDTAQ API is shown in Figure 2. When called, this program waits for an entry to arrive on the DTAQ1 data queue in the MYLIB library. When an entry is made, the API returns the data to the program, which executes subroutine $GotData and then checks for another entry. If there is no more data in the queue, the program waits; if there is, the API keeps returning entries until the queue is empty. This is because I have specified a wait time of -1. You can replace the contents of the $GotData subroutine with anything you wish to do when you receive an entry on your data queue.

You’ll also notice that I have a loop that examines the data received. This loop continues until the data received equals the value ‘*QUIT’. This is a way to end what would otherwise be a never-ending job. The value ‘*QUIT’ is placed on the data queue manually by using the Send Data Queue (QSNDDTAQ) API. An example program is shown in Figure 3. This program accepts a data queue name and library as parameters and then calls the QSNDDTAQ API, passing the value ‘*QUIT’. This ends the job that is monitoring the data queue. Or you could simply end the job manually every night. It’s up to you.
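Assuming the Figure 3 program is compiled as ENDDTAQ in MYLIB (names I’ve chosen for illustration), ending the monitor from a command line or a nightly scheduled job is a single call:

CALL PGM(MYLIB/ENDDTAQ) PARM('DTAQ1' 'MYLIB')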

I’ve Got the Data. Now What?

Now that you know how to retrieve data from a data queue, refer back to Figure 1 to see the data format for output queues. With this data alone, you can trigger events based on spool file name, job name, job number, or user name. But if you want to use more attributes of the spool file to determine what to do when it reaches the ready status, another API must be used: Retrieve Spooled File Attributes (QUSRSPLA). You can use this API to retrieve information about the spool file, such as number of pages, user data, and form type. See the OS/400 Print APIs V4R4 manual for more information on this API.
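As a rough sketch of how such a call might look in the same fixed-format style as the figures, assume you identify the spool file by the qualified job name, spool file name, and spool file number taken from the data queue entry (the internal identifiers can then be left blank). The field names are mine, and the layout of the SPLA0100 receiver is documented in the Print APIs manual:

D RcvVar          S           2048A
D RcvLen          S              9B 0 INZ(%size(RcvVar))
D FmtName         S              8A   INZ('SPLA0100')
D QualJob         S             26A
D IntJobID        S             16A   INZ(*BLANKS)
D IntSplfID       S             16A   INZ(*BLANKS)
D SplfName        S             10A
D SplfNbr         S              9B 0
D ErrCode         DS
D  EBytesProv             1      4B 0 INZ(0)
D  EBytesAvail            5      8B 0
 *
 * Build QualJob, SplfName, and SplfNbr from the data queue
 * entry, then retrieve the spool file's attributes.
C                   CALL      'QUSRSPLA'
C                   PARM                    RcvVar
C                   PARM                    RcvLen
C                   PARM                    FmtName
C                   PARM                    QualJob
C                   PARM                    IntJobID
C                   PARM                    IntSplfID
C                   PARM                    SplfName
C                   PARM                    SplfNbr
C                   PARM                    ErrCode

With the error code’s bytes-provided field initialized to zero, any failure is signaled as an escape message rather than returned in the structure.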

How you use this information to help your users is limited only by your imagination. Users often request multiple copies of reports. Some want the report sent to 10 different locations in your office. Others want the report emailed to them and their supervisor or even posted on the Web. Now that you have the information to identify a spool file, these requests can be fulfilled.

In the article “Three Steps to Report Duplication and Distribution” (MC, March 2000), I showed you the Duplicate Spooled File (SPL2SPL) command. This command lets you duplicate a spool file to up to 10 different output queues. This tool makes report distribution a little easier. And with the use of data queues, you can automate this process.

Let’s say that you distribute a month-end sales report to five different users—each on a different output queue. Each month, after the report is completed, you could use the SPL2SPL command to distribute the sales report. Since this happens only once a month, it’s not difficult to do. But it can be tedious, especially if more reports have to be distributed this way once users find out how “easy” it seems for you to accomplish.

If you attach a data queue to the output queue on which the sales report is generated, you can write a program that monitors this output queue for that particular report. When the report completes and enters the ready status, an entry is written to the data queue. The monitoring program can recognize the report either by data in the data queue entry, such as file name or job name, or by other attributes that you retrieve with the QUSRSPLA API, such as the user data parameter. When the program recognizes the month-end sales report, it automatically calls the SPL2SPL command and distributes the report. Brilliant!
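As a sketch of that idea, the $GotData subroutine from Figure 2 might test the spool file name from the data queue entry and run a command through the Execute Command (QCMDEXC) API. The report name SALESRPT is an assumption, and I use a simple SNDMSG as a stand-in for the SPL2SPL command string, whose parameters are covered in the earlier article:

D WPCmd           S            256A
D WPCmdLen        S             15P 5 INZ(%size(WPCmd))
 *
C     $GotData      BEGSR
 *
 * QSplfName comes from the QEntry data structure shown earlier
C                   if        (QSplfName = 'SALESRPT')
C                   eval      WPCmd = 'SNDMSG +
C                             MSG(''Sales report is ready'') +
C                             TOUSR(*SYSOPR)'
C                   CALL      'QCMDEXC'
C                   PARM                    WPCmd
C                   PARM                    WPCmdLen
C                   endif
 *
C                   ENDSR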

You could even take this a step further. If you already have a tool that emails a spool file, you can have your program call this command as well, emailing the spool file to its intended recipients. Or, even easier, you can send an email to users alerting them that the month-end sales report is available on output queue PRT01.

Another idea for distribution that seems to be catching on is posting spool files on the Web or an intranet. This method reduces network traffic because you aren’t sending a large attachment to more than one recipient. Instead, you convert the spool file to an HTML document, place it on the AS/400 Integrated File System (AS/400 IFS), and send out an email giving users the URL for the document. Only one copy of the spool file exists, and users can look at it, print it, or save it to their local machine by using their favorite browser. This method of distribution saves disk space and intranet bandwidth and brings you one step closer to a paperless office.
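The HTML conversion step itself is beyond the scope of this article, but the basic plumbing for getting a spool file’s contents into the IFS might look like this sketch (the object names, job identifier, and paths are illustrative, and the result is plain text unless you add markup or use a conversion tool):

CRTPF FILE(QTEMP/RPTTXT) RCDLEN(133)
CPYSPLF FILE(SALESRPT) TOFILE(QTEMP/RPTTXT) +
  JOB(123456/MYUSER/MYJOB) SPLNBR(1) CTLCHAR(*NONE)
CPYTOSTMF FROMMBR('/QSYS.LIB/QTEMP.LIB/RPTTXT.FILE/RPTTXT.MBR') +
  TOSTMF('/reports/sales.txt') STMFOPT(*REPLACE)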

Use Your Imagination!

This technique allows you, your staff, and your users to achieve anything that can be imagined. Because of the substantial increase in use of new technologies, I felt it was time to revisit some of the old “tricks” that the more seasoned AS/400 programmers may have forgotten. Using data queues with output queues is one of those ideas that is great in theory, but finding a good use for it has been difficult. Now, with the option to email reports or post them on the Web, you have a reason to apply some of those ideas.

References and Related Materials

• OS/400 Object APIs V4R3 (SC41-5865-02, CD-ROM QB3AMQ02)
• OS/400 Print APIs V4R4 (SC41-5874-03, CD-ROM QB3AMZ03)
• OS/400 Printer Device Programming V4R4 (SC41-5713-03, CD-ROM QB3AUJ03)
• “Three Steps to Report Duplication and Distribution,” Bradley V. Stone, MC, March 2000

Function (10, character): Describes the function that placed the entry on the data queue. The value is *SPOOL when used with an output queue.

Record Type (2, character): The record type within the function. When used with an output queue, the value is 01, which means that a spool file has reached the RDY status on an output queue.

Qualified Job Name (26, character): The qualified job name of the job that created the spool file. The first 10 characters are the job name, the next 10 are the user name, and the last six are the job number.

Spool File Name (10, character): The spool file name of the spool file placed on the output queue.

Spool File Number (4, binary): The spool file number of the spool file placed on the output queue.

Qualified Output Queue (20, character): The name of the output queue. The first 10 characters are the output queue name, and the last 10 are the output queue library.

Figure 1: This is the layout of data in a data queue when associated with an output queue.

FQPRINT    O    F  132        PRINTER OFLIND(*INOF)
 ****************************************************************
D WPDtaQ          S             10A   INZ('DTAQ1')
D WPDtaQLib       S             10A   INZ('MYLIB')
D WPQLength       S              5P 0
D WPQData         S            128A
D WPQWait         S              5P 0 INZ(-1)
D First           S              1A   INZ('Y')
 ****************************************************************
C     RFDTAQ        PLIST
C                   PARM                    WPDtaQ
C                   PARM                    WPDtaQLib
C                   PARM                    WPQLength
C                   PARM                    WPQData
C                   PARM                    WPQWait
 *
C                   dow       (WPQData <> '*QUIT')
C                   EXSR      $GotData
 *
C                   CALL      'QRCVDTAQ'    RFDTAQ
C                   enddo
 *
C                   eval      *INLR = *On
 ****************************************************************
 * Replace the contents of this subroutine with whatever you
 * want to happen when an entry is retrieved.
 ****************************************************************
C     $GotData      BEGSR
 *
C                   if        (First <> 'Y')
 *
C                   if        (*INOF)
C                   EXCEPT    HDG
C                   eval      *INOF = *OFF
C                   endif
 *
C                   EXCEPT    DETAIL
C                   else
C                   EXCEPT    HDG
C                   eval      First = 'N'
C                   endif
 *
C                   ENDSR
 ****************************************************************
OQPRINT    E            HDG            2  2
O                                              4 'Page'
O                       PAGE                  10
O          E            DETAIL         1
O                                             14 'Entry received'
O          E            DETAIL         2
O                       WPQData              128


Figure 2: This is an example of how to call the Receive Data Queue (QRCVDTAQ) API.
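Because this program waits indefinitely for data queue entries, you wouldn’t call it interactively. Assuming it is compiled as DQMON in MYLIB (again, illustrative names), you could submit it to batch like this:

SBMJOB CMD(CALL PGM(MYLIB/DQMON)) JOB(DQMON)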

D WPDtaQ          S             10A
D WPDtaQLib       S             10A
D WPQLength       S              5P 0 INZ(%size(WPQData))
D WPQData         S            128A   INZ('*QUIT')
 ****************************************************************
C     *ENTRY        PLIST
C                   PARM                    WPDtaQ
C                   PARM                    WPDtaQLib
 *
C     ENDDTAQ       PLIST
C                   PARM                    WPDtaQ
C                   PARM                    WPDtaQLib
C                   PARM                    WPQLength
C                   PARM                    WPQData
 *
C                   CALL      'QSNDDTAQ'    ENDDTAQ
 *
C                   eval      *INLR = *On

Figure 3: Here is an example of how to call the Send Data Queue (QSNDDTAQ) API to end the output queue monitoring.

