Creating Real-time Interfaces for Data Warehousing and Reporting Applications

Maintaining timely, consistent, reliable data movement between transaction and reporting systems is an ugly, thankless job, but somebody’s gotta do it. If that somebody is you, read on. This article presents an easy alternative to batch-oriented data replication and transfer that might make your life a little easier and your users a little happier.

Traditional interface designs tend to be batch-oriented, scheduled to kick off and “sweep” files at regular intervals or as part of overnight procedures. Transactions that occur throughout the day on the transaction system are not captured and made available on the reporting system until the batch collection and transfer process executes. This delay can lead to confusion and frustration in the user community.

What if you could eliminate the batch processing delays and update reporting systems as transactions occur? Reporting systems would always be current with transaction systems, and there would be no more problems trying to schedule synchronization of the AS/400 and other platforms.

Without buying third-party software, you can create a seamless interface between two systems, with no modification to your existing applications. You can accomplish all this by using a journal receiver to capture transaction file changes and a concurrent task to monitor, receive, and process these journal entries as they occur.

Preliminaries

First, I’ll create the transaction file and the reporting table. To show how this system works, I’ll set up a simple item file whose data elements are part number, description, unit of measure, and some quantity fields. I’ll refer to this file as the transaction file, since it represents the file updated as a result of business transaction processing. The DDS to create the file is shown in Figure 1.

Now, I’ll create a reporting (“target”) file representing the transaction Part Master in our data warehouse. For simplicity, I’ll keep this new table on the AS/400 and in the same record layout as the transaction file. Bear in mind, however, that this new table represents the replicated item master on some target system, which could be running on the same AS/400, a different AS/400, or a different computing platform altogether. (I used the SQL command shown in Figure 2 to create the reporting table on a Windows NT system running the Oracle RDBMS.)

Using SQL to update the target file also affords more flexibility, such as when the reporting file resides in a database other than DB2/400. If SQL is absent on your AS/400, simply substitute native DDS and I/O operations for the code in this example.

Using OS/400 to Capture File Changes

The next step is to start up a journal to capture all changes to the transaction Part Master file. The journal receiver acts as a “transaction ledger,” recording before and after record images of every change to the transaction file. You’ll need to create a journal receiver and then start the journal, using the commands shown in Figure 3.

Using the journal leaves nothing to chance. It is virtually impossible for the OS/400 system to miss a change to a journaled file; therefore, you can rest assured that your program will be informed of all changes to the file as they occur. It is not necessary to make any modifications to existing programming or files to achieve this real-time function.

(For more information about journaling, see “SYSOP: Journaling, Part 1,” MC, January 1998, and “SYSOP: Journaling, Part 2,” MC, February 1998.)

Creating the Control Data Area

Data area PMP@ is used to control processing of the file monitor by providing information such as the journal name, operational status, exit program name, and last journal sequence number processed. I’ve made the data area name the same as the file name, with the “@” character appended to the end so that file PMP becomes data area PMP@.

CRTDTAARA DTAARA(XXX/PMP@) +
          TYPE(*CHAR) LEN(1024) +
          TEXT('control data area +
          for journal monitor')

I created file JRNCTL to describe the PMP@ data area (see Figure 4). There are three reasons for this:

• The data area consists of subfields.
• The data area is referenced by three programs.
• Other monitored files also need to use data areas of this description.

Journal Monitor CLP

Figure 5 shows the source for JRMONC, the CL program that monitors the journal for changes to the file. This code works as a driver for the exit program named in the PMP@ data area; whenever a journal entry is received, the exit program from the data area will be called to process the entry. In this case, RPG program JRMONR acts as the exit program. It processes the file changes to the target reporting file as they are received and then returns control to the monitor program JRMONC until the next change is received.

JRMONC has three essential code elements:
• The first element retrieves the control data area used to store information about the file being monitored. By using the monitored file name as the parameter, JRMONC allows the program to serve as a generic driver program for all files being monitored by the system.

• The second element makes sure the data area status byte is blank. A separate command sets this byte to a nonblank value whenever it is necessary to shut down the program. In a complete implementation, this would also be a good spot to insert the code necessary to change the journal receiver. I advise changing the journal receiver frequently, especially on high-transaction-volume files.

• The third element is the Receive Journal Entry (RCVJRNE) command. This command monitors the journal, receives journal entries as they are created on the system, and calls the exit program to process each journal entry to the target file. When control is returned from the exit program, the journal monitor will wait until the next journal entry is received and then repeat the process.

Only record-level changes (journal code R) and user-initiated journal entries (journal code U) are used. All other journal entries are ignored. The RCVJRNE command passes two parameters to the exit program: the receiver data and a 1-byte status code. Whenever the exit program returns a 9 in the status code byte, the RCVJRNE command is ended, allowing the CL program to end normally.

Because of the structure of the CL program, you simply have to “clone” the RPG exit program to add additional files to your journal monitor system.
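
For instance, to monitor a second file, you would create its control data area, clone and rename the exit program, and submit another copy of the same driver. A minimal sketch, assuming a hypothetical file CUSTMAST whose CUSTMAST@ data area names the cloned exit program:

SBMJOB CMD(CALL PGM(JRMONC) PARM(CUSTMAST)) +
       JOB(MON_CUSTMAST)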

Journal Processor JRMONR

Figure 6 contains the source code for the journal processor, JRMONR. This program receives the journal changes made in the transaction file and applies them to the reporting file.

The structure of JRMONR is very straightforward: a mainline routine with first-time and end-of-job subroutines. The program has insert, update, and delete capability over the target file. Before updating the target file, the program compares the before and after images of the monitored file to make sure the record really was changed. Although I used SQL to apply the changes to the target file, you may prefer native AS/400 I/O operations if your interface requirements are confined to AS/400 systems.

The first-time subroutine (*INZSR) is used to define the *ENTRY parameter list. It also defines the SQL add, update, and delete statements and prepares these SQL statements for first use. The End of Job ($EOJ) routine performs cleanup and provides an orderly shutdown.

Definition specifications—The BEFORE and AFTER data structures are externally defined to the PMP file record layout. The Prefix keyword is used on the BEFORE data structure to substitute the characters XX for the first two characters of each subfield name.

JRNENT is the data structure of the journal entry passed into the program as a parameter. The first 125 bytes of JRNENT contain journal control information, and the program uses elements of this to update the control data area, PMP@. The receiver data contains the file data starting in position 126. This data is variable length, so I adjusted the length of this data to match the record length of the monitored file. In this case, the file record layout is 62 bytes, so the JNDTA subfield is also 62 bytes. (Setting the parameter length to equal journal entry length is not necessary, according to the CL Reference. If the program parameter length is less than the journal entry, however, journal receiver data is truncated. If the program parameter length is greater, “nonessential information” is placed into the additional parameter positions.)

Journal code and journal entry types—The detail calculation section of the program examines the journal code and journal entry type and branches accordingly.

• If the journal entry type field, JNTYP, contains the letters UB (a file image before update), the program places the image in the BEFORE data structure and returns. The before image will be compared to the after image on the next cycle.

• If JNTYP contains DL, the SQLDELETE statement is executed to remove the record from the target file.

• If JNTYP is PT or UP and the before and after images do not match, the program attempts an update, via SQLUPDATE, on the target file. If the update is unsuccessful, the program executes SQLINSERT (see the SQL sketch following this list). The structure of this code helps ensure that the target file remains synchronized with the monitored file. Attempting an update first, even when JNTYP contains PT, is analogous to performing a CHAIN in native I/O to make sure a record doesn’t exist before trying to add it. Likewise, if you try an update on the target file and the record doesn’t exist there, you can add the record to the target file even though the journal entry type JNTYP was UP.
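
Expressed in SQL, the update-then-insert flow amounts to the following sketch; these statements mirror the ones prepared in *INZSR of Figure 6 and are not additional code to compile:

UPDATE PART_MASTER
   SET PART_DESCRIPTION = ?, INVTY_UNIT_MEASURE = ?,
       ON_HAND_QUANTITY = ?, ON_ORDER_QUANTITY = ?,
       QUANTITY_SHIPPED = ?
 WHERE PART_NUMBER = ?

-- if SQLCOD = 100 or SQLSTT = '02000' (row not found):

INSERT INTO PART_MASTER VALUES(?,?,?,?,?,?)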

After a successful update to the target file, the program commits the changes to the target database and returns control to the monitor program to await the next change in the monitored file.

BEFORE and AFTER considerations—When I placed the BEFORE and AFTER comparison data structures into the program, I was thinking of situations in which I don’t want the interface to update a target unless certain field change conditions are met. Many interface applications are interested only in changes to specific fields or even specific values in specific fields; to such applications, a record update carries no meaning. Using the before and after images makes it easy to compare changes to the monitored file, field by field, as a prerequisite for updating the target file.

This concept can also be counterproductive. Suppose you want to “seed” the target file or run a periodic “sweep” to make sure the target file is synchronized to the monitored file? An easy way to do so would be to perform a database update to each record in the monitored file, thus causing the monitor program to receive an update journal entry on each record. Easy, that is, provided the program will update the target file regardless of the lack of difference between the before and after images! You may want to consider your requirements and remove the BEFORE and AFTER comparisons.
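
For example, assuming you have SQL access to the monitored file, a sweep can be as simple as a self-assigning update. This sketch drives one update journal entry per record, but because the before and after images will be identical, the comparison logic must be bypassed for these entries to reach the target:

UPDATE xxx/PMP
   SET PMPART = PMPART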

Managing the control data area—An important job for the RPG program is to update the control data area with the sequence number of the last entry processed. This facilitates a smooth startup and ensures that each receiver entry will be processed only once under normal conditions. The program compares the journal sequence received with the last sequence number processed as a precondition.

The program also examines the contents of the status byte, ##STAT, from the data area to see if shutdown has been requested. A nonblank value in the status byte causes the program to execute the $EOJ subroutine, passing a 9 back to the RCVJRNE command (via the STATUS parameter) to indicate that RCVJRNE should terminate.

Ending the Process

To complete this application, you need a graceful way to stop the program. Under normal conditions, you want the program to remain operational as long as the subsystem in which it is running is active. Of course, the simplest way to end the journal monitor is to issue the ENDJOB command; however, this solution isn’t very elegant since it usually results in an abend message!

To shut down the journal monitor cleanly, call the ENDMONC program, shown in Figure 7, passing the name of the monitored transaction file. ENDMONC sets an end-of-job marker in the control data area and then sends a manual journal entry to the journal to trigger the RCVJRNE command and call JRMONR. When JRMONR sees the manual journal entry, it executes the end-of-job routine at $EOJ and terminates itself.
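
The shutdown request itself is a one-line call, passing the name of the monitored file:

CALL PGM(xxx/ENDMONC) PARM(PMP)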

Additional Comments

Obviously, it is important to keep the file journal operational whenever the system is available for user transactions. If you decide to implement a data transfer application based on this concept, it’s a good idea to ensure that journals are always active for included files.
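
One way to assert this at startup is a sketch like the following; the blanket MONMSG is a simplification that tolerates the “already being journaled” condition, and a production version should monitor for the specific message instead:

STRJRNPF FILE(xxx/PMP) JRN(xxx/DWJRN) +
         IMAGES(*BOTH) OMTJRNE(*OPNCLO)
MONMSG MSGID(CPF0000) /* tolerate: file already journaled */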

JRMONC (Figure 5) is an asynchronous task. It is intended to be called at system startup and remain active until the subsystem in which it is running has ended. You may want to create a small, dedicated subsystem for this type of program, which is neither truly batch nor interactive, and make JRMONC an auto-start job. Using a dedicated subsystem for asynchronous tasks makes these tasks easier to control and simplifies managing system resources. If you prefer to start the monitor from a command line, use the following command:

SBMJOB CMD(CALL PGM(JRMONC) PARM(PMP)) +
       JOB(MON_PMP)

The Change Journal (CHGJRN) command, which attaches a new receiver to the journal, should be added to the JRMONC monitor program. Remember to change journal receivers frequently, especially with files having high-transaction volumes. The CHGJRN command could use keyword parameters retrieved from the control data area. Including this command in the program means that the journal receiver will be changed each time the program is started. Use the *GEN parameter on the JRNRCV keyword; it allows the system to generate the journal receiver name and saves you the trouble of coding sequence logic.
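
For example, using the journal created in Figure 3:

CHGJRN JRN(xxx/DWJRN) JRNRCV(*GEN)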

To simplify the RPG example, I omitted error handling and additional transaction reporting issues, but they are obviously important considerations. A good option for error management is to write an error log on the AS/400 whenever an SQL error is encountered, and manage the errors on an exception basis.
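
A hypothetical log table for that purpose might look like this sketch (the table and column names are illustrative and not part of the article’s code):

CREATE TABLE xxx/SQLERRLOG
   (ERROR_TIME       TIMESTAMP  NOT NULL,
    SQL_CODE         INTEGER    NOT NULL,
    JOURNAL_SEQUENCE DEC (10,0) NOT NULL,
    ENTRY_DATA       CHAR (62)  NOT NULL)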

You also need to assess the likelihood that the journal sequence number will reach 10 billion and roll over to 1; if this is likely, you will need to plan a response for managing the rollover.
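
Since CHGJRN is already part of the startup path, one possible response is to reset the sequence numbering whenever the receiver is changed, remembering to clear the last-processed sequence number in the control data area at the same time:

CHGJRN JRN(xxx/DWJRN) JRNRCV(*GEN) SEQOPT(*RESET)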

You’ve Made the Connection

Once you’ve created the sample objects in this article, try it out. Remember to “seed” the control data area, and then use DFU or some other file utility to add, change, and delete test records in the PMP file. By the time you’ve started SQL and queried the records in the target file, your information will be waiting for you!
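
Seeding the control data area takes a few CHGDTAARA commands; this sketch follows the JRNCTL layout in Figure 4 (substitute your own library for XXX):

CHGDTAARA DTAARA(XXX/PMP@ (1 10)) VALUE('DWJRN')
CHGDTAARA DTAARA(XXX/PMP@ (11 10)) VALUE('XXX')
CHGDTAARA DTAARA(XXX/PMP@ (21 10)) VALUE('0000000000')
CHGDTAARA DTAARA(XXX/PMP@ (31 10)) VALUE('JRMONR')
CHGDTAARA DTAARA(XXX/PMP@ (41 1)) VALUE(' ')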

The OS/400 journal management facility provides a powerful, flexible means for creating real-time, seamless interfaces between systems. If you’ve been frustrated by some of the shortfalls of traditional approaches, try adapting the sample code presented here to your own situation.

References

Backup and Recovery V3R2 (SC41-5304, CD-ROM QB3ALE01)
Control Language Reference V3R2 (SC41-5722, CD-ROM QB3AUP01)

*===============================================================
* To compile:
*
*    CRTPF FILE(XXX/PMP) SRCFILE(XXX/QDDSSRC)
*
*===============================================================
A          R RF$PM                    TEXT('PART MASTER RECORD')
A            PMPART        15         COLHDG('Part' 'Number')
A            PMDESC        30         COLHDG('Part' 'Description')
A            PMIUM          2         COLHDG('Inventory' 'U/M')
A            PMONHQ         8  0      COLHDG('Quantity' 'On' 'Hand')
A            PMORDQ         8  0      COLHDG('Quantity' 'On' 'Order')
A            PMSHPQ         8  0      COLHDG('Quantity' 'Shipped')
A
A          K PMPART

Figure 1: Part Master file DDS

CREATE TABLE xxx/PART_MASTER
   (PART_NUMBER        CHAR (15) NOT NULL,
    PART_DESCRIPTION   CHAR (30) NOT NULL,
    INVTY_UNIT_MEASURE CHAR (2)  NOT NULL,
    ON_HAND_QUANTITY   DEC (8,0) NOT NULL,
    ON_ORDER_QUANTITY  DEC (8,0) NOT NULL,
    QUANTITY_SHIPPED   DEC (8,0) NOT NULL,
    PRIMARY KEY (PART_NUMBER))

Figure 2: SQL command to create the reporting file

CRTJRNRCV JRNRCV(xxx/DWJRN00001)

CRTJRN    JRN(xxx/DWJRN) +
          JRNRCV(xxx/DWJRN00001)

STRJRNPF  FILE(xxx/PMP) +
          JRN(xxx/DWJRN) +
          IMAGES(*BOTH) +
          OMTJRNE(*OPNCLO)

Figure 3: Creating and starting a journal

*===============================================================
* To compile:
*
*    CRTPF FILE(XXX/JRNCTL) SRCFILE(XXX/QDDSSRC)
*
*===============================================================
A          R FJRNCTL                  TEXT('control data area layout')
A            ##JRNM        10         COLHDG('Journal' 'Name')
A            ##LIBN        10         COLHDG('Library' 'Name')
A            ##JNSQ        10S 0      COLHDG('Journal' 'Sequence#')
A            ##PGMN        10         COLHDG('Exit' 'Program' 'Name')
A            ##STAT         1         COLHDG('Operational' 'Status')

Figure 4: JRNCTL—control data area definition

/*==================================================================*/
/* To compile:                                                      */
/*                                                                  */
/*    CRTCLPGM PGM(XXX/JRMONC) SRCFILE(XXX/QCLSRC)                  */
/*                                                                  */
/*==================================================================*/
/* SAMPLE INTERFACE JOURNAL MONITOR */
PGM &FILE

DCL &FILE    *CHAR 10 /* file name */
DCL &EXITPGM *CHAR 10 /* exit program name */
DCL &DTAARA  *CHAR 10 /* data area name */
DCL &JRNNAME *CHAR 10 /* journal name */
DCL &JRNSEQ  *CHAR 10 /* last sequence number processed */
DCL &STATUS  *CHAR 1  /* operational status */

/* retrieve journal receiver information */
CHGVAR &DTAARA VALUE(&FILE *TCAT '@')
RTVDTAARA DTAARA(&DTAARA (1 10)) RTNVAR(&JRNNAME)
RTVDTAARA DTAARA(&DTAARA (21 10)) RTNVAR(&JRNSEQ)
RTVDTAARA DTAARA(&DTAARA (31 10)) RTNVAR(&EXITPGM)
RTVDTAARA DTAARA(&DTAARA (41 1)) RTNVAR(&STATUS)

/* make sure the operational status byte is not set for shutdown */
CHGDTAARA DTAARA(&DTAARA (41 1)) VALUE(' ')

RCVJRNE JRN(&JRNNAME) EXITPGM(&EXITPGM) TOENT(*NONE) +
        JRNCDE((R) (U *IGNFILSLT)) ENTTYP(UP PT +
        DL UB '00') DELAY(*NEXTENT *CLS)

ENDPGM

Figure 5: Journal monitor program JRMONC

*===============================================================
* To compile:
*
*    CRTSQLRPGI OBJ(xxx/JRMONR) SRCFILE(xxx/QRPGLESRC)
*               RDB(yyy) OPTION(*XREF) DBGVIEW(*SOURCE)
*
* xxx is the object and source library
* yyy is the relational database name
* (for AS/400, this is ordinarily the system name)
*
* If the target system is not an AS/400, also specify
* OPTION(*SQL)
*
*===============================================================
D* Data structures for "before" and "after" record images
D BEFORE        E DS                  EXTNAME(PMP) PREFIX(XX:2)
D AFTER         E DS                  EXTNAME(PMP)
D* Journal monitor control data area
D CONTRL        E DS                1024    EXTNAME(JRNCTL)
D* Journal entry data structure
D JRNENT          DS
D  JNSEQ                  6     15  0
D  JNCDE                 16     16
D  JNTYP                 17     18
D  JNDTA                126    187
D* Variables
D SQLADD          S            250
D SQLCHG          S            250
D SQLDEL          S            250
D STATUS          S              1
D @COUNT          S              5  0 INZ(0)
D @@MXCT          S              5  0 INZ(1)
D YES             S              3    INZ('YES')
D NO              S              3    INZ('NO ')
D ERROR           S              3    INZ('NO ')
D WARNING         S              3    INZ('NO ')
C***********************************************************
C** /MAINLINE                                             **
C***********************************************************
C**
C     *DTAARA       DEFINE    PMP@          CONTRL
C     *LOCK         IN        *DTAARA
C**
C     ##STAT        IFNE      *BLANKS
C                   EXSR      $EOJ
C                   ELSE
C**
C     JNSEQ         IFGT      ##JNSQ
C**
C** Initialize the SQLCA data structure; update the data area
C**
C                   CLEAR                   SQLCA
C                   MOVE      JNSEQ         ##JNSQ
C**
C** Process receiver data into the data structures
C**
C     JNTYP         IFEQ      'UB'
C                   CLEAR                   BEFORE
C                   MOVE      JNDTA         BEFORE
C                   RETURN
C                   ELSE
C**
C                   CLEAR                   AFTER
C                   MOVE      JNDTA         AFTER
C                   ENDIF
C**
C     JNTYP         IFEQ      'DL'
C/EXEC SQL EXECUTE SQLDELETE USING
C+    :PMPART
C/END-EXEC
C                   ELSE
C*
C     JNTYP         IFEQ      'PT'
C     JNTYP         OREQ      'UP'
C     BEFORE        ANDNE     AFTER
C*
C/EXEC SQL EXECUTE SQLUPDATE USING
C+    :PMDESC,:PMIUM,:PMONHQ,
C+    :PMORDQ,:PMSHPQ,:PMPART
C/END-EXEC
C*
C* Test for "record not found" and try to insert if true
C*
C     SQLCOD        IFEQ      100
C     SQLSTT        OREQ      '02000'
C*
C/EXEC SQL EXECUTE SQLINSERT USING
C+    :PMPART,:PMDESC,:PMIUM,
C+    :PMONHQ,:PMORDQ,:PMSHPQ
C/END-EXEC
C*
C                   ENDIF
C                   ENDIF
C                   ENDIF
C*
C* Commit database update if SQL was successful
C*
C     ERROR         IFEQ      NO
C     WARNING       ANDEQ     NO
C/EXEC SQL COMMIT
C/END-EXEC
C**
C/EXEC SQL PREPARE SQLINSERT FROM :SQLADD
C/END-EXEC
C/EXEC SQL PREPARE SQLUPDATE FROM :SQLCHG
C/END-EXEC
C/EXEC SQL PREPARE SQLDELETE FROM :SQLDEL
C/END-EXEC
C                   ENDIF
C                   ENDIF
C                   ENDIF
C*
C                   OUT       *DTAARA
C                   UNLOCK    *DTAARA
C**
C                   RETURN
C**
C**********************************************************
C** /$EOJ - Last record subroutine                        **
C**********************************************************
C**
C     $EOJ          BEGSR
C**
C* Disconnect by releasing the connection and committing
C**
C/EXEC SQL COMMIT
C/END-EXEC
C**
C/EXEC SQL RELEASE CURRENT
C/END-EXEC
C**
C* Update the control data area, set the exit flag, set on LR
C**
C                   MOVE      *BLANKS       ##STAT
C                   MOVE      JNSEQ         ##JNSQ
C**
C                   MOVE      '9'           STATUS
C                   MOVE      *ON           *INLR
C**
C     XEOJ          ENDSR
C**
C************************************************************
C** /*INZSR - First time subroutine                         **
C************************************************************
C     *INZSR        BEGSR
C**
C     *ENTRY        PLIST
C                   PARM                    JRNENT
C                   PARM                    STATUS
C*
C*****/EXEC SQL
C*****+   CONNECT TO :OCINST USER :OCUSID USING :OCPWRD
C*****/END-EXEC
C**
C** Establish the SQLADD, SQLCHG and SQLDEL variables
C**
C                   EVAL      SQLADD = 'INSERT INTO ' +
C                                      'PART_MASTER ' +
C                                      'VALUES(' +
C                                      '?,?,?,?,?,?)'
C*
C                   EVAL      SQLCHG = 'UPDATE PART_MASTER ' +
C                                      'SET PART_DESCRIPTION = ?,' +
C                                      'INVTY_UNIT_MEASURE = ?,' +
C                                      'ON_HAND_QUANTITY = ?,' +
C                                      'ON_ORDER_QUANTITY = ?,' +
C                                      'QUANTITY_SHIPPED = ? ' +
C                                      'WHERE PART_NUMBER = ?'
C*
C                   EVAL      SQLDEL = 'DELETE FROM PART_MASTER ' +
C                                      'WHERE PART_NUMBER = ?'
C**
C** Prepare SQL statements for first use
C**
C/EXEC SQL PREPARE SQLINSERT FROM :SQLADD
C/END-EXEC
C/EXEC SQL PREPARE SQLUPDATE FROM :SQLCHG
C/END-EXEC
C/EXEC SQL PREPARE SQLDELETE FROM :SQLDEL
C/END-EXEC
C**
C     XINZSR        ENDSR

Figure 6: Journal processor program JRMONR

/*==================================================================*/
/* To compile:                                                      */
/*                                                                  */
/*    CRTCLPGM PGM(XXX/ENDMONC) SRCFILE(XXX/QCLSRC)                 */
/*                                                                  */
/*==================================================================*/
PGM &FILE

DCL &FILE    *CHAR 10 /* file name */
DCL &DTAARA  *CHAR 10 /* data area name */
DCL &JRNNAME *CHAR 10 /* journal name */

/* retrieve the journal control data area */
CHGVAR &DTAARA VALUE(&FILE *TCAT '@')
RTVDTAARA DTAARA(&DTAARA (1 10)) RTNVAR(&JRNNAME)

/* set the data area shutdown flag */
CHGDTAARA DTAARA(&DTAARA (41 1)) VALUE('9')

SNDJRNE JRN(&JRNNAME)

ENDPGM

Figure 7: Command processor ENDMONC