
Visual Basic and Data Queues


As personal computers take on the role of application servers, offloading processing cycles from the AS/400 host, a simple and easy way of moving data back and forth between the PC and the host is needed. Users who need full-blown transaction processing and can’t afford to lose any speed should use Advanced Program-to-Program Communications (APPC) or Common Programming Interface-Communications (CPI-C) programming. While not the most difficult programming situation, APPC is not for the faint of heart and can only be used in a Systems Network Architecture (SNA) environment. There is an alternative for those of you who have fewer than 5,000 transactions per day, can afford some downgrades in speed, or who are using a LAN running TCP/IP: data queues. Data queues have been around since the early System/38 days and became available in Client Access/400 at OS/400 V2R1, but they are not often used. Many people simply don’t understand how to use them or even that they exist. Data queues are extremely simple to implement and require almost no overhead to maintain and monitor on the AS/400 side. As evidence of their efficiency, check out all of the subsystem jobs on the Work with Active Jobs screen. They all sit in a dequeue wait (DEQW) state.

In this article, I will present the two building blocks needed to use data queues in your applications: the declaratives and the actual call to the data queue APIs. The declaratives are used to tell Visual Basic which APIs will be used and establish the variables needed in the program. Next, the actual code to access the data queue will be presented. I’ll leave the application code itself for you to create. All you will need to do to achieve a true client/server environment is add the declaratives and the API calls to your own program.

What Are Data Queues?

Data queues are system objects that are accessible by any high-level language (HLL). They are also the fastest asynchronous communications method between two tasks on the AS/400. The only structure that rivals the speed of a data queue in asynchronous communications is a user queue, and data queues are nothing more than specialized user queues. User queues require access at the machine interface layer and are not as easy to use, whereas data queues have reasonably friendly interfaces.

Each data queue contains a series of entries. You can use and think of them much like the stack structure available in many other environments. Entries can be accessed in one of three ways: first in, first out (FIFO); last in, first out (LIFO); or, in more recent versions of OS/400, directly by key. A really useful feature of keyed data queues is the ability to access them in the same way FIFO queues are processed. If you do not specify a key for the retrieve API, a keyed queue will be sequentially processed in key order. In this way, entries can be forced to the front or end of a queue for processing. This is a very handy way of setting up shutdown commands for the queue process. The RPG/400 code segment presented in this article uses this technique. A nondestructive read is also available, so entries are not removed from the queue when they are accessed. Usually a queue entry is removed from the queue when it is accessed, just as a job queue entry is removed from the queue when the job moves into the subsystem to be processed.
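The three retrieval orders, and the trick of forcing a shutdown entry to the front of a keyed queue with a low key value, can be illustrated with a toy model. This Python sketch is not AS/400 code; the class and its behavior are a simplified stand-in for what QSNDDTAQ and QRCVDTAQ do against a real queue object:

```python
class DataQueueSim:
    """Toy model of AS/400 data queue retrieval orders (illustrative only)."""

    def __init__(self, order="FIFO"):
        self.order = order          # "FIFO", "LIFO", or "KEYED"
        self.entries = []           # list of (key, data); key is "" for non-keyed queues

    def send(self, data, key=""):
        self.entries.append((key, data))

    def receive(self, key=None):
        """Remove and return one entry, or None if the queue is empty."""
        if not self.entries:
            return None
        if self.order == "LIFO":
            return self.entries.pop()[1]
        if self.order == "KEYED":
            if key is None:
                # No key specified: process sequentially in key order,
                # FIFO among entries with equal keys.
                idx = min(range(len(self.entries)),
                          key=lambda i: self.entries[i][0])
            else:
                idx = next(i for i, e in enumerate(self.entries) if e[0] == key)
            return self.entries.pop(idx)[1]
        return self.entries.pop(0)[1]   # FIFO

q = DataQueueSim("KEYED")
q.send("order 1001", key="50")
q.send("order 1002", key="50")
q.send("*SHUTDOWN*", key="00")   # low key forces this entry to the front
```

Because `"00"` sorts before `"50"`, the shutdown entry is retrieved first even though it was sent last; the two orders then come back in the sequence they were sent.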

Because data queue entries are difficult to back up, they should not be used if the data placed into them cannot be recovered via another method, nor should entries be left in data queues during the IPL process. Data queue entries rarely get lost, just as we rarely lose job queue or spool queue entries, but the chance does exist, so critical data needs a recoverable source other than the data queue itself.

Data Queue Commands

Data queues are created, deleted, and managed with the CRTDTAQ, DLTDTAQ, and WRKDTAQ commands, respectively. The create and delete commands are available to batch, online, and REXX procedures; WRKDTAQ is only available online. Like any other object created on the system, initial security can be determined by default or specified. I secure objects that most users would not normally access, giving access to only the processes that need it. This minimizes the chances that a data queue will accidentally be cleared. One other note: The queue’s text description should be consistent with the use of the queue and the type of entries contained in it. Using the words data queue in the description is simply a waste of words.

When you create a data queue, you must specify the name of the queue and how long the queue entries need to be. You can also define the order in which entries are presented, the length of the key (if it is a keyed queue), and whether the sender ID should accompany each queue entry. Many applications can benefit from knowing the origination point of a queue entry so that the application can react accordingly. Because the sender’s ID can be automatically embedded with each data queue entry, data queues have an advantage over database files in some circumstances. An example of an application that would benefit from the sender ID is an order entry program. When a customer service representative completes an order taken over the telephone, the data queue application already knows which representative sent the queue entry, and a personalized fax can be sent to confirm the order. If the user profile were stored only in the database, the application would need to have that file open and retrieve the record to do the same thing. The data queue is much faster.
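As a sketch of what such a create might look like, the queue used by the fax server program later in this article (queue FAXDQ in library FAXLIB, with 256-byte entries to match the RPG data structure in Figure 6) could be created with something like the following. The attribute values shown are illustrative, not a prescription:

```
CRTDTAQ  DTAQ(FAXLIB/FAXDQ) MAXLEN(256) SEQ(*FIFO) +
         SENDERID(*YES) AUT(*EXCLUDE) +
         TEXT('Order confirmation entries for fax server')
```

SENDERID(*YES) is what attaches the originating job and user profile to each entry, enabling the personalized-fax scenario described above, and AUT(*EXCLUDE) reflects the advice to lock the queue down to only the processes that need it.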

DLTDTAQ merely deletes the queue from the system. No check is done to see if there are entries in the queue. The DLTDTAQ command also offers a generic name parameter, so if your naming convention is a good one, daily maintenance of the queues becomes quite easy. The WRKDTAQ command allows users to manage the queues online, but the basic attributes of the queue (length, key length, and sequence) cannot be changed. You must delete the queue and recreate it to change these attributes.

If you are running OS/400 V3R1 or lower, the user tools contain a Display Data Queue (DSPDTAQ) command that will copy the contents of the data queue into an online subfile for viewing. Created before most of the data queue APIs existed, this command uses the Dump Object (DMPOBJ) command to dump the queue to a spool file and an RPG program to read the spool file and present the subfile.

The Called Modules

At the heart of the data queue interface are called modules; the modules allow any HLL to access the queues. There are five main modules: the send, receive, clear, retrieve data queue message, and retrieve queue description APIs (QSNDDTAQ, QRCVDTAQ, QCLRDTAQ, QMHRDQM, QMHQRDQD, respectively). The exact form of each of these calls can be found in the OS/400 System API Reference (SC41-3801-00; CD QBKAVD00). The retrieve data queue message API will not remove the entry from the queue. This can be extremely useful in several circumstances and, when you are using the DSPDTAQ command, eliminates the need for the Dump Object (DMPOBJ) command, because the RPG/400 program can retrieve the entries from the queue directly. In our examples, we will be using the send and receive data queue called modules. This will give you a good idea of how all the modules work.

Uses of Data Queues

Data queues have several distinct advantages over message queues and data areas in transmitting data between applications. Message queues do not allow multiple jobs to allocate the queue. Once a message queue is allocated by one job, no other job can retrieve and remove entries from the queue at the same time.

Message queue processing is also among the slowest queue processing methods available on the AS/400. As with data queues, the sender can be retrieved from an entry on a message queue, but it is more difficult to do so. Additionally, message queues have a predefined size limit that may cause problems if the number of transactions starts to grow or peak at certain times. Like data queues, the entries on a message queue are not backed up when the message queue object is saved.

Data areas offer some advantages over message queues in that multiple jobs can access and change data areas, but, because of the size limit of the data area, large quantities of entries cannot be processed with data areas. The sending application will need to know when the first entry was processed so it can load the data area with the next transaction.

Adding and Processing Entries

Data queues are populated using the QSNDDTAQ program. Multiple programs can place entries on the queue for processing. Figure 1 shows how several programs can populate one queue. In this case, the three inputs to the queue are any combination of online programs that send queue entries to the queue for asynchronous processing. The online sessions can be almost any combination of sessions you can conjure up: programmable workstation sessions, terminal sessions, or dial-up connections. In our hypothetical situation, a field salesperson is dialing into the AS/400 and using data queues to place orders directly on the system. Order confirmation and shipment dates are then sent to the customer via fax. Data queues are the vehicles that pass the information needed in the confirmation process to the server program.

Batch processes can also populate the data queue in the same way. A clever way of adding this type of function without changing the order entry program would be to add a trigger to the order entry header database file. The trigger could be set up to send the data queue entry when a record is successfully added to the file. In this way, the synchronous trigger program will finish very quickly, so the online program is not slowed down, and the asynchronous job can process the data queue entry without affecting the online response time.
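The decoupling described above (a fast, synchronous trigger that only enqueues, and an asynchronous server job that does the slow work) can be sketched generically. This Python sketch is not AS/400 code: a thread-safe queue stands in for the data queue, a function call stands in for the trigger, and a worker thread stands in for the server job; the `"STOP"` entry plays the role of the shutdown entry used by the RPG program later in this article:

```python
import queue
import threading

dtaq = queue.Queue()     # stands in for the data queue
confirmed = []           # stands in for the confirmation file

def trigger(order):
    """Runs synchronously in the order entry job; returns immediately."""
    dtaq.put(order)      # cheap: just enqueue, no confirmation work here

def server():
    """Asynchronous server job: drain entries until a shutdown entry arrives."""
    while True:
        entry = dtaq.get()
        if entry == "STOP":
            break
        confirmed.append("confirmed " + entry)   # stands in for the fax/confirm step

t = threading.Thread(target=server)
t.start()
for order in ("A100", "A101"):
    trigger(order)       # online job is never slowed by the fax work
trigger("STOP")
t.join()
```

The point of the design is visible in `trigger`: the synchronous path does nothing but a single enqueue, so online response time is unaffected no matter how slow the confirmation step is.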

An unlimited number of server programs can retrieve and process entries from a single data queue. The biggest problem created by this ability is the possibility that a “deadly embrace” may occur. A deadly embrace occurs when server program A reads record A for update (thus locking the record) and subsequently needs to read record B to continue processing. At the same time, server program B has read record B for update (locking it as well) and needs record A to continue. Neither program can continue until one of the record locks is released, which is usually accomplished by canceling one of the programs. Careful design and the proper use of I/O routines will make server programs operate in a manner that is fast and reasonably free of errors.
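One standard piece of the "careful design" that prevents a deadly embrace is to have every server acquire its records in the same global order, so a circular wait can never form. The sketch below uses generic Python locks as stand-ins for record locks (it is not AS/400 I/O code); both servers want records A and B, but each locks them in sorted name order regardless of which it "wants" first, so both always complete:

```python
import threading

locks = {"A": threading.Lock(), "B": threading.Lock()}  # stand-ins for records A and B
results = []

def server(name, needed):
    # Acquire locks in a fixed global order (sorted by record name),
    # regardless of the order the server would naturally ask for them.
    for rec in sorted(needed):
        locks[rec].acquire()
    results.append(name + " updated " + "+".join(sorted(needed)))
    # Release in reverse acquisition order.
    for rec in reversed(sorted(needed)):
        locks[rec].release()

t1 = threading.Thread(target=server, args=("server A", ["A", "B"]))
t2 = threading.Thread(target=server, args=("server B", ["B", "A"]))
t1.start(); t2.start()
t1.join(); t2.join()
```

If each server instead acquired locks in the order listed in `needed`, server A could hold A while waiting for B and server B hold B while waiting for A: exactly the embrace described above.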

Figure 2 shows a situation in which multiple server programs can access, remove, and process entries on the queue. In the order entry example cited above, the server program actually does the processing to send the order confirmation as the result of retrieving a data queue entry. Because the data queue server program will do almost no I/O processing and will only write confirmation records to the database as the very last function, very few database updates are done, eliminating record locks and seize locks from the program. If the system fails between the time the data queue entry is retrieved and the update of the confirmation file, the order entry system will not have been flagged with an order acknowledgment. A recovery program can find all of the orders that did not have a confirmation sent out and repopulate the data queue accordingly. Complete recoverability of the database is thus built into the application.

Another advantage of data queues is the number of operating environments that can access them. APIs are available for DOS, OS/2, and both Windows 3.1 and Windows 95/NT environments. Of course, any AS/400 program can also access data queues. You can create very inexpensive and responsive client/server applications without having to do anything more than purchase Microsoft Visual Basic and learn four or five APIs. Once you understand the APIs, the rest of the API world awaits you, as most of them operate in exactly the same fashion as the data queue APIs.

Client Access/400 Setup

There are only a few things that need to be done to set up the Client Access/400 portion of our hypothetical scenario. Before starting development, you will need to copy a few Dynamic Link Libraries (DLLs) to the PC. You also need to acquire the WINDOWS.H file that comes with the Microsoft Windows Software Development Kit. This file will only be needed for development. If you are using Windows 3.1, your Windows environment will need to be in the enhanced mode. Your application program must be linked with the Client Access/400 import library EHNAPIW.LIB, and the directory name where Client Access/400’s Windows DLLs are located must be listed in your path statement. The DLLs are shipped in the QPWXCWN and QPWXCWND folders on the AS/400.

It doesn’t matter which version of Visual Basic you use. I personally use Enterprise Edition 4.0. When Client Access/400 starts up, two jobs will start in the communications subsystem. These jobs will monitor for upload and download activity. Depending on the amount of data that will be moving from the PC to the AS/400, the Client Access/400 configuration program can be used to determine the size of the buffer that is used by these two communications jobs. Making the buffer too large or too small will adversely affect the performance of the data queue function. The configuration program is documented in Chapter 14, “Configuring Client Access/400 with the Configuration Program,” in the Client Access/400 for DOS Ext. Memory Setup manual (SC41-3500-01; CD QBKAKC03).

The list of all the DLLs you’ll need for our sample application is shown in Figure 3. You can also find this list in the Client Access/400 for Windows 3.1 API technical reference (SC41-3531-01; CD QBKAKN01). At the very least, the DLLs for the router, data queues, and translation programs need to be copied to the development PC. These DLLs will also need to be included in the setup program that will be created in the Visual Basic Setup wizard when the program is distributed. A method for updating these DLLs when IBM sends out PTFs that affect them will also be needed.

The Visual Basic Program

Our Visual Basic program can be broken up into three parts. The first part is the load form, which will declare all the global variables that are needed, as well as the APPC function calls needed for Visual Basic to invoke the data queue APIs. The second part (which will not be presented here) does the work of the application program. The last part of the program sends the data queue entry to the AS/400, where the queue entry will be retrieved by the RPG/400 program and processed. Figure 4 shows the load form code with the declare function statements. The user password, system name, data queue name, and data queue library were retrieved in another portion of the program by using an INI file structure. The INI file was updated with the most current password from the user as part of the process of signing on to the Visual Basic program. Only the important parts of the code for the data queue are shown. The declare function to get the INI file is followed by the send declaration for the data queue process. The set mode function is also shown, because the data will need to be converted from ASCII to EBCDIC format prior to being sent to the AS/400. The reverse is true when bringing a queue entry back from the AS/400. I usually build a master list of all the APPC functions needed and copy them into the program en masse, because I am never sure when I might use one of them.

The RPG/400 Server Program

The RPG/400 program should be designed to be very simple; do the minimum amount of I/O; and, if the program does need to read files for update, fetch the records and update them with as little processing between the read-for-update and the write operation as possible. The client code that sends the data queue entry is shown in Figure 5. The RPG server program code that receives the entry is in Figure 6. In the RPG program shown in Figure 6, the main loop will wait for a data queue entry to be received. It will first check to see if the queue entry calls for the program to shut down. If so, simply turn on the last-record indicator and end. Do not leave the files open, because this program does not restart often. Next, call the program that actually does the confirmation of the order, write the confirmation record, and loop back to the start. This program could also be designed as an Integrated Language Environment (ILE) module and be bound to the confirmation program for faster performance. There is really no reason this program could not be a control language program, except for the write operation that is being used to change the database. If you have the user tools (library QUSRTOOL, prior to V3R6 or V3R2), the CL program could use the user tool called WRTDBF to accomplish that task, since a called program does most of the work.

The best way to handle server programs for these types of data queue functions is to set up a subsystem with autostart jobs that will start up as many servers as are needed to handle the load. Since the amount of memory these programs are likely to use is very small, not much memory will need to be allocated to the subsystem. I usually just set the subsystem up to use the same shared pool as the batch subsystem. The class and other run attributes can be set up to favor some processes and allow others to fall behind in times of heavy processing as needed. Figure 6 shows the RPG/400 program.
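As a sketch of that subsystem setup, the commands below create a subsystem sharing the *BASE pool, a job description whose request data calls the server program from Figure 6, and an autostart job entry so a server starts whenever the subsystem does. The subsystem, job description, and job names here are hypothetical; only DTQ001RG comes from the article's code:

```
CRTSBSD  SBSD(QGPL/DTAQSBS) POOLS((1 *BASE)) +
         TEXT('Data queue server subsystem')
CRTJOBD  JOBD(QGPL/DTQJOBD) RQSDTA('CALL PGM(FAXLIB/DTQ001RG)')
ADDAJE   SBSD(QGPL/DTAQSBS) JOB(DTQSERVER) JOBD(QGPL/DTQJOBD)
STRSBS   SBSD(QGPL/DTAQSBS)
```

To run several servers against the same queue, add one ADDAJE entry per server job; ending the subsystem after sending the shutdown queue entry gives the servers a chance to close their files cleanly.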

Performance

Performance of the data queue process on the AS/400 is limited only by the resources that are available to the communications job running the router connection and to the data queue server itself. If communications jobs are short of memory, or there is an insufficient activity level configured for the subsystem, unpredictable communications errors may occur. Once the router has lost its grip on the host, often the only way to recover the session is to start it over again. This can be troublesome for dial-up users, who may have to reconnect a long-distance call to enter their data.

The data queue server program needs to be very efficient. The best way to avoid problems with the server program is to limit its function. If there are I/O- or CPU-intensive tasks that need to be done by the server program, see if the design will allow a subtask to asynchronously process the data. That way, the server program can go back and get the next queue entry as quickly as possible. The Performance Explorer available in V3R6 will allow you to tune the AS/400 program to its maximum efficiency. (If you are running V3R1 or V3R2, you will need the Timing and Paging Statistics PRPQ.)

The PC code must also be written to eliminate extra processing that is not directly needed by the queuing process, reducing the size and complexity of the program. Since the editing of the data should be done in other applications, once the SUBMIT button is pressed to start the queuing process, things should move quickly.

Conclusions

Next time you need to move data between the AS/400 and a PC application, you don’t need to use programming-intensive APPC or ODBC, or even know how to write a Dynamic Data Exchange (DDE) application (although there is a DDE interface for those of you who would like to write it that way). You can use the data queue APIs provided free with the AS/400 and Client Access/400. The communications overhead is limited, and the AS/400 code is very simple. While not suited to full-blown transaction processing, a data queue program can shorten the time it takes to put up a true client/server application, and most shops already have the programming talent on staff. If your shop is trying to produce some client/server applications and you need to have one running quickly, this may be your best opportunity to try data queues.

Figure 1: Several Programs Can Populate a Single Queue



Figure 2: Multiple Server Programs Can Access, Remove, and Process Entries on the Queue

EHNAPPC   Client Access Router
EHNDQW    Data Queues DLL
EHNDTW    Data Translation
EHNNETW   Network Redirector
EHNODBC   ODBC DLL
EHNRQW    Remote SQL
EHNSFW    Shared Folders
EHNSRW    Submit Remote Commands
EHNTFW    File Transfer
EHNVPW    Virtual Printer
EHNCLN1   internal
EHNCLN2   internal
EHNCLP1   internal
EHNCL01   internal
EHNCL02   internal
EHNHAW    internal
EHNRTRW   internal
PCSMOND   internal

Figure 3: The Necessary DLLs

Public SendString As String             '* 263
Public RecString As String              ' string received from AS/400

Dim RecordID As Integer                 ' Define elements of data type.
Dim Ordernum As String * 5
Dim OrderData As String * 241

Public strUserid As String
Public strPassword As String
Public strSystemname As String
Public strUploaddataQueue As String
Public strDownloaddataqueue As String
Public strLibrary As String

' Declare function for getting data from the INI file
Declare Function GetPrivateProfileString Lib "Kernel" _
    (ByVal lpApplicationName As String, _
     ByVal lpKeyName As String, _
     ByVal lpDefault As String, _
     ByVal lpReturnedString As String, _
     ByVal nSize As Integer, _
     ByVal lpFileName As String) As Integer

' Declare APPC functions
Declare Function EHNDQ_Send Lib "EHNDQW.DLL" _
    (ByVal hWnd%, _
     ByVal lpszQueueName$, _
     ByVal lpszQueueLocation$, _
     ByVal lpachDataBuffer$, _
     ByVal lDataLength&) As Integer

Declare Function EHNDQ_SetMode Lib "EHNDQW.DLL" _
    (ByVal hWnd%, _
     ByVal lpszQueueLocation$, _
     ByVal lMode&) As Integer

Figure 4: The Visual Basic Code to Declare All Variables

Public Function SendToDataQueue() As Boolean
'*********************************************************************
'* Name:     SendToDataQueue                                         *
'* Function: Sends one record at a time to the AS/400 in the         *
'*           form of a string 241 characters long.                   *
'*********************************************************************

    Const dbReadOnly = 4            ' Set constant.

    T1.Enabled = False
    cmdreport.Enabled = False

    SendString = ""
    SendString = RecordID & Ordernum & OrderData
    SendToDataQueue = False

    txtstatus.Text = "Loading Data to AS/400....."
    txtstatus.Refresh

    SendBuffer = SendString
    que$ = strLibrary + "/" + strUploaddataQueue
    sys$ = strSystemname + Chr$(0)
    quelen = 241

    nRtnCode = EHNDQ_Send(frmMonitor_AS400.hWnd, que$, sys$, SendBuffer, quelen)
    DoEvents

    If nRtnCode <> 0 Then
        MsgBox ("#2 APPC Error" + Str$(nRtnCode))
        Exit Function
    End If

    SendToDataQueue = True
End Function

Figure 5: The Visual Basic Code to Send the Data Queue Entry

****************************************************************

* To Compile: *

* CRTRPGPGM PGM(DTQ001RG) SRCFILE(XXX/QRPGSRC) *

* *

* No other parameters are needed for this program. *

****************************************************************

FSNDFAX O E K DISK A

*

****

* Data Queue Entry Structure

*

I#ENTRY DS


I 1 5 #END

I 6 10 #ORDR

I 11 256 #DATA

*

* Parameter List

C RCVDQ PLIST Parm list

C PARM #DTAQ 10 Queue Name

C PARM #QLIB 10 Queue Library

C PARM #DLEN 50 Data length

C PARM #ENTRY 241 Queue Entry

C PARM #WAIT 50 Wait Time

*****

* The following parameters are available but not used in this

* application.

*

* PARM #KYTYP 2 Key Type

* PARM #KYLEN 30 Key Length

* PARM #SNDDV 10 Sender Device

* PARM #SNDLN 30 Sender Length

* PARM #SNDID 10 Sender ID

*

C MOVEL 'FAXDQ' #DTAQ Queue Name

C MOVEL 'FAXLIB' #QLIB Queue Lib

C Z-ADD 241 #DLEN Entry Length

C Z-ADD -5 #WAIT Wait Forever

C MOVE *BLANKS #ENTRY Init Entry

*

* Set Processing loop and proceed

*

* While the last record indicator is not on, receive

* queue entries, then call the processing program.

* Write the audit record out and loop back for the

* next queue entry.

*

C *INLR DOWNE *ON

C CALL 'QRCVDTAQ' RCVDQ

*

C #END IFEQ 'STOP'

C MOVE *ON *INLR

*

C ELSE

C CALL 'FAXPGM'

C PARM #ORDR

C PARM #DATA

C WRITE FAXRCD

C MOVE *BLANKS #ENTRY

*

C ENDIF

*

C ENDDO

***************************************************************

Figure 6: RPG Program to Retrieve Data Queue Entries

