Referential Integrity & Triggers in DB2/400

Brief: Enforcing database integrity used to mean coding logic into every application program. V3R1 allows you to enforce these rules at the database level by using referential integrity and triggers. This article discusses design considerations and shows you practical examples for implementing these powerful new capabilities into your database.

As an AS/400 programmer, you're well aware that something as simple as an upsurge in sales can trigger all kinds of changes to your business applications. All of a sudden, you need to address problems in the order entry system. Data integrity needs to be enforced between the master files for the order-entry system. The Accounts Receivable department wants greater flexibility in managing customers who exceed their credit limit, and the Sales department wants to implement special offers to promote a new group of products.

Each of these problem areas represents the statement of a business rule that needs to be enforced by the order-entry system. As a programmer, it's your job to implement a solution that enforces these business rules.

In the past, you enforced business rules by adding code to existing application programs or by writing new programs to handle the functions. However, this can be a complex, lengthy process, especially if the number of existing programs that modify and manipulate the data in these files is large or the application is implemented on multiple platforms. In addition, application program code is just one of a variety of mechanisms available to modify and manipulate data; therefore, it offers only a partial solution to enforcing business rules.

Referential integrity and triggers offer an alternative that enforces these business rules no matter what mechanism is used to modify and manipulate data. They are two of the most important enhancements to DB2/400 in V3R1, so every AS/400 programmer needs to know how to use them effectively. This article shows you how. It begins by defining referential integrity and triggers at a conceptual level. After that, the article presents a detailed case study of the scenario just presented. Finally, you'll learn how to use referential integrity and triggers to solve the problems that the case study presents. Along the way, you'll become familiar with the key concepts you need to understand to effectively implement these two new database functions.

Referential Integrity

The first database function we'll examine is referential integrity. Referential integrity lets database developers ensure that data is consistent across physical files in the DB2/400 database. With referential integrity, developers define constraints, also called referential constraint relationships. These constraints are applied at the physical file level, ensuring that every record in a dependent file has a corresponding record in its parent file. Any program that inserts, deletes, or updates a record is prevented from doing so if the change would violate a constraint. This makes referential integrity a powerful enhancement to OS/400.

In the past, database administrators had to anticipate all the mechanisms through which database consistency could be compromised. Then they wrote application logic to handle each potential compromise. Now that DB2/400 has referential integrity, it can automatically enforce database constraints no matter what mechanism is used to manipulate data.

Triggers

The second function we'll examine is triggers. The DB2/400 trigger facility lets developers automatically invoke a program whenever a change is made to a physical file or table. A trigger fires in response to a change made to a file: a record is added, changed, or deleted. This makes triggers a powerful tool for enforcing business rules.

Case Study: Order Entry System

Once we know that both referential integrity and triggers are available, it changes how we view the problems we mentioned in the order-entry system. Let's look at our case study problems in more detail.

Our order-entry system comprises a large number of complex programs, but only four physical files. These order-entry files are Order Header (ORDR_HDR), Order Detail (ORDR_DTL), Item Master (ITEM_MSTR), and Customer Master (CUST_MSTR). The DDS for these files is included in Figures 1-4, along with some sample data.

The first problem in our case study involves data integrity. Data integrity needs to be enforced between all of the order-entry master files. Unfortunately, the Order Header file contains orders for customer number 01102, but there's no matching customer number in the Customer Master file. And that's not all. The Order Detail file lists orders for item number 1209, but the Item Master file has no matching item number. To complicate matters further, order number 435 in the Order Detail file has no matching order header record. Not only do we need to fix these problems, but the data integrity of the entire system needs to be enforced so these conditions cannot recur. Previously, enforcing this data integrity would have required a lot of analysis to identify the programs that updated any of the files, and a great deal of programming to enforce the integrity of every access to them. This is a prime candidate for the referential integrity functions of DB2/400.
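The orphan-record conditions just described are exactly what referential integrity will later detect automatically. As a conceptual illustration only (a Python sketch with made-up sample data, not AS/400 code), an orphan check amounts to this:

```python
# Conceptual sketch (Python, not AS/400 code) of the orphan check that
# referential integrity will later perform automatically: find records in a
# dependent file whose foreign key has no matching parent key.
def find_orphans(dependent, foreign_key, parent, parent_key):
    parent_keys = {rec[parent_key] for rec in parent}
    return [rec for rec in dependent if rec[foreign_key] not in parent_keys]

# Made-up sample data mirroring the case study's first problem.
cust_mstr = [{"CUS_NBR": 1101}, {"CUS_NBR": 1103}]
ordr_hdr = [{"ORD_NBR": 434, "CUS_NBR": 1101},
            {"ORD_NBR": 436, "CUS_NBR": 1102}]  # customer 1102 does not exist

orphans = find_orphans(ordr_hdr, "CUS_NBR", cust_mstr, "CUS_NBR")
print(orphans)  # the order header record for the missing customer
```

Once constraints are in place, the database performs this kind of check on every insert, update, and delete, so orphan records can never be created in the first place.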

Our second problem deals with the Accounts Receivable department. They need greater control to manage customers who exceed their credit limit. Simply flagging a customer's order in a report is no longer sufficient. When a customer exceeds his credit limit, the Accounts Receivable department should be notified immediately, so someone can review each situation individually and take appropriate action. Previously, identifying the myriad conditions under which a customer might exceed a set credit limit could result in a long series of programming changes. This business requirement is a prime candidate for DB2/400's trigger facility.

Our third problem centers on a new policy from the Sales department. They want to implement special offers to promote a new group of products. Since these promotional offers will have a special introductory price, there is a maximum number of promotional items a customer can purchase on each order. In the past, such a simple requirement would have necessitated a complete revamping of the order-entry system and some detailed coding to implement the promotions. But it can be solved easily with DB2/400's trigger function.

Referential Integrity

First, turn your attention to the data integrity problems between the Customer Master and Order Header files, the Item Master and Order Detail files, and the Order Header and Order Detail files. You can add one referential constraint relationship to the Order Header file and two to the Order Detail file to eliminate the data integrity problems in the four files used by the order-entry system.

Referential integrity will then ensure the following:

o Every customer number in the Order Header file has a matching customer number in the Customer Master file.

o Every item number in the Order Detail file has a matching item number in the Item Master file.

o Every order number in the Order Detail file has a matching order number in the Order Header file.

But how does this work? Before delving into the detailed solution, let's explore referential integrity in more detail to provide the proper foundation to understand the solution.

Referential Constraint Relationships

Referential integrity identifies relationships between files by establishing constraints between them. This is known as a referential constraint relationship, and it works between two physical files. The primary file in the relationship is called the parent file. The secondary file in the relationship is called the dependent file.

The relationship between the parent file and the dependent file is defined by the keyed access paths for these files. A parent key is a keyed access path with unique keys for the parent file. A foreign key is a keyed access path for the dependent file, and it may or may not have unique keys. The parent key and foreign key must have identical attributes.

The referential constraint relationship states that every non-null foreign key value from the dependent file must have a matching parent key value from the parent file. (Null means there is no value in a field, which is different than zero or blank. It is a special way for a relational database to represent missing information.) In other words, a record in the dependent file cannot exist unless its foreign key value has a matching parent key value in a record from the parent file.
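The rule can be stated in a few lines of code. This is a conceptual Python sketch of the insert-time check, not actual DB2/400 internals:

```python
# Conceptual sketch of the insert-time referential check: a dependent-file
# record is allowed only if its foreign key is null (None here) or matches
# an existing parent key.
def insert_allowed(foreign_key_value, parent_keys):
    return foreign_key_value is None or foreign_key_value in parent_keys

parent_keys = {1101, 1103}
assert insert_allowed(1101, parent_keys)      # matching parent key: allowed
assert insert_allowed(None, parent_keys)      # null foreign key: allowed
assert not insert_allowed(1102, parent_keys)  # no matching parent: rejected
```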

You define a referential constraint relationship by adding a referential constraint to the dependent file. The constraint then becomes an attribute of the file to which it is added. To add a constraint to a dependent file, the parent file, parent key access path, and dependent file must already exist.

When the constraint is added, it is given a name. At the same time, the foreign key is defined, the parent key and parent file are identified, and any constraint rules are specified.

Referential Constraint Rules

Referential constraint rules define the action that occurs when a file is altered. These rules focus on the three normal functions of database management: record insertion (add), record update or modification (change), and record deletion (delete). Specific rules are defined for each function. Referential constraint rules can apply to records in both the parent file and the dependent file. The "Referential Integrity Rules" sidebar shows all the actions allowed when records are inserted, updated, or deleted in a dependent or parent file.

Command Interface

If you need to see what constraints have already been placed on a file, you can use the Display File Description (DSPFD) command to list them, as shown in Figure 5. Other commands to add and manage referential constraints include:

o ADDPFCST-Add Physical File Constraint.

o RMVPFCST-Remove Physical File Constraint.

o CHGPFCST-Change Physical File Constraint.

o WRKPFCST-Work with Physical File Constraint.

o EDTCPCST-Edit Check Pending Constraint.

o DSPCPCST-Display Check Pending Constraint.

These commands are grouped together on the OS/400 Constraint Command Menu accessed with GO CMDCST. SQL statements that perform similar functions are also supplied in OS/400 but are not covered in this article.

Implementation Considerations

You can add referential constraints only to single-member physical files; multimember files are not supported. Referential constraints cannot be added to AS/400 system files, source files, or program-described files. A referential constraint also cannot cross an auxiliary storage pool (ASP) boundary. The limit is 300 referential constraint relationships per file. In our order-entry case study, this limitation is obviously not going to be a problem.

So what happens if the referential integrity of a file is breached? If an application program takes an action that violates a referential constraint, an escape message and status code that the programmer must handle are passed back to the program. In other words, it's the same process you've grown accustomed to. When an application program attempts a database activity that does not successfully complete, the application program terminates unless the programmer handles the exception.

This type of control is exactly what we need to enforce data integrity and solve our first order-entry problem.

Order Header/Customer Master Data Integrity

To ensure that every customer number in the Order Header file has a matching customer number in the Customer Master File (CUST_MSTR), we'll add the following referential constraint to the Order Header file to provide data integrity for the customer number (CUS_NBR) field:

 ADDPFCST FILE(ORDR_HDR) +
          TYPE(*REFCST) KEY(CUS_NBR) +
          CST(CUS_NBR_CST) +
          PRNFILE(CUST_MSTR) +
          PRNKEY(CUS_NBR) +
          DLTRULE(*NOACTION) +
          UPDRULE(*NOACTION)

The dependent file is ORDR_HDR, the foreign key is CUS_NBR, and the constraint name is CUS_NBR_CST. The parent file is CUST_MSTR, the parent key is CUS_NBR, and the referential constraint rules for update and delete are *NOACTION.

Order Detail/Item Master Data Integrity

To ensure that every item number in the Order Detail file has a matching item number in the Item Master file, we add the following referential constraint to the Order Detail file to provide data integrity for the item number (ITEM_NBR) field:

 ADDPFCST FILE(ORDR_DTL) +
          TYPE(*REFCST) KEY(ITEM_NBR) +
          CST(ITEM_NBR_CST) +
          PRNFILE(ITEM_MSTR) +
          PRNKEY(ITEM_NBR) +
          DLTRULE(*NOACTION) +
          UPDRULE(*NOACTION)

The dependent file is ORDR_DTL, the foreign key is ITEM_NBR, and the constraint name is ITEM_NBR_CST. The parent file is ITEM_MSTR, the parent key is ITEM_NBR, and the referential constraint rules for update and delete are *NOACTION.

Order Header/Order Detail Data Integrity

Finally, to ensure that every order number in the Order Detail file has a matching order number in the Order Header file, we add the following referential constraint to the Order Detail file to provide data integrity for the order number (ORD_NBR):

 ADDPFCST FILE(ORDR_DTL) +
          TYPE(*REFCST) KEY(ORD_NBR) +
          CST(ORD_NBR_CST) +
          PRNFILE(ORDR_HDR) +
          PRNKEY(ORD_NBR) +
          DLTRULE(*NOACTION) +
          UPDRULE(*NOACTION)

The dependent file is ORDR_DTL, the foreign key is ORD_NBR, and the constraint name is ORD_NBR_CST. The parent file is ORDR_HDR, the parent key is ORD_NBR, and the referential constraint rules for update and delete are *NOACTION.

Implementation Results

These three actions provide the required data integrity for the order-entry system, guaranteeing that the following business rules will be enforced:

o Every customer number in the Order Header file will have a matching customer number in the Customer Master file.

o Every item number in the Order Detail file will have a matching item number in the Item Master file.

o Every order number in the Order Detail file will have a matching order number in the Order Header file.

Note that no program logic was changed, and no actual coding took place. All of the integrity checks are implemented at the database level and are now an integral part of the database. Applications that access the database, whether they are native to the AS/400 or client/server applications on a networked PC, are subject to these same constraints.

Triggers

Let's move on to the next two problems with the order entry system: the credit limit problem and the sales promotion problem.

When a customer exceeds his credit limit, the Accounts Receivable department needs to be notified immediately. To accomplish this, a trigger will be added to the Customer Master file that fires after any record in the Customer Master file is updated. This trigger activates a program that compares the customer's credit limit to the balance owed. If the balance owed exceeds the credit limit, the trigger program sends a message to the Accounts Receivable department so someone can review the situation and take appropriate action.

The same trigger facility will work for the sales promotion problem. Two triggers are added to the Order Detail file, to fire after any record in the Order Detail file is added or updated. These triggers activate a program that compares the quantity ordered for an item number to a control number for that item number in the Item Master file. The control number is the maximum number of units that can be ordered for a given item number. If the control number is zero, there is no limit on the item. If the quantity ordered exceeds the control number, the trigger program sends a message to the Sales department so appropriate action can be taken.

How does this work? Before implementing the detailed solution, let's explore triggers to provide the proper foundation for understanding it.

Trigger Components

A trigger is made up of four components: a base file, a trigger event, a trigger time, and a trigger program.

The base file is the physical file to which the trigger is added as an attribute.

The trigger event is the condition that causes the trigger to fire. The trigger event can be the insertion (add), update (change), or deletion (delete) of a record in the base physical file or in a logical file built over the base physical file.

The trigger time determines when the trigger fires, causing the trigger program to run. The trigger time can be immediately before or immediately after the trigger event takes place.

The trigger program is a program that runs at the trigger time and performs the action desired by the firing of the trigger. A trigger program can contain SQL statements, high-level language statements, and most OS/400 commands.

When the trigger program is called, it receives parameter information that includes the identification of the file and record that are currently being accessed, along with a before and/or after image of the record being accessed.
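To make that flow concrete, here is a conceptual Python sketch (not AS/400 code) of a file that passes before and after record images to registered trigger programs; the class and method names are invented for illustration:

```python
# Conceptual sketch (Python, not AS/400 code) of a file that passes
# before/after record images to registered trigger programs.
class TriggeredFile:
    def __init__(self, records):
        self.records = records
        self.triggers = {}  # (trigger time, trigger event) -> trigger program

    def add_trigger(self, time, event, program):
        self.triggers[(time, event)] = program

    def _fire(self, time, event, before, after):
        program = self.triggers.get((time, event))
        if program is not None:
            program(before, after)  # pass the before/after record images

    def update(self, index, new_record):
        before = self.records[index]
        self._fire("*BEFORE", "*UPDATE", before, new_record)
        self.records[index] = new_record
        self._fire("*AFTER", "*UPDATE", before, new_record)

f = TriggeredFile([{"CUS_NBR": 1101, "BAL_OWE": 500.00}])
f.add_trigger("*AFTER", "*UPDATE",
              lambda before, after: print("balance changed:",
                                          before["BAL_OWE"], "->",
                                          after["BAL_OWE"]))
f.update(0, {"CUS_NBR": 1101, "BAL_OWE": 900.00})  # fires the *AFTER trigger
```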

Adding and Removing Triggers

A trigger is added to a physical file or table with the Add Physical File Trigger (ADDPFTRG) command and removed with the Remove Physical File Trigger (RMVPFTRG) command. SQL does not yet support triggers because the SQL trigger standard is still in development.

Implementation Considerations

Each physical file can have a total of six triggers defined for it. The reason for this limit becomes clear when you consider that there are three trigger events: insert, update, and delete. Each trigger event can have two trigger times-one before and one after the trigger event. Therefore, three trigger events multiplied by two trigger times equals a maximum of six triggers per file.
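That arithmetic can be spelled out with a short, purely illustrative Python sketch:

```python
# The six-trigger limit spelled out: three trigger events times two
# trigger times yields six (time, event) combinations per file.
from itertools import product

times = ["*BEFORE", "*AFTER"]
events = ["*INSERT", "*UPDATE", "*DELETE"]
combinations = list(product(times, events))
print(len(combinations))  # 6, the maximum number of triggers per file
```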

Even though triggers are added to physical files, they can also be fired by an insertion, update, or deletion in a logical file built over the physical file that contains the trigger.

The firing of one trigger can cause another trigger to fire, which can result in cascading or nested triggers, looping triggers, and recursive triggers. Since all of these situations are allowed, infinite loops are possible. Obviously, any of the above situations can have a severe impact on system resources. Careful planning, design, and testing will help to avoid these situations. DB2/400 allows a maximum of 200 levels of nested triggers.

Triggers are a relatively simple but powerful DB2/400 function. The impact they can have on performance ranges from minimal when a single trigger is fired to perform a simple operation, to significant when firing one trigger causes a series of triggers to be fired in a trigger cascade.

However, the performance impact of triggers should be minimal if an operation is removed from the application program level and implemented in a trigger program at the database level. This assumes that the same operation performed at the application level will be performed in the trigger program at the database level.

Triggers can be used to improve performance in client/server environments by reducing the conversation flow between the client and the server.

Now that you understand the theory and concepts behind referential integrity and triggers, we can review the solutions to the case-study problems that use the trigger facility.

Credit-Limit Management

To provide better management of customers who exceed their credit limit, we add the following trigger to the Customer Master file, which fires after a record in the Customer Master file is updated:

 ADDPFTRG FILE(CUST_MSTR) +
          TRGTIME(*AFTER) +
          TRGEVENT(*UPDATE) +
          PGM(CHK_CREDIT)

The trigger time is *AFTER and the trigger event is *UPDATE. The trigger program (CHK_CREDIT) compares the customer's credit limit to the balance owed. If the balance owed exceeds the credit limit, the trigger program sends a message to the Accounts Receivable department.
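The logic of CHK_CREDIT can be sketched as follows. This is a conceptual Python rendering, not the actual HLL trigger program; the notify parameter is a hypothetical stand-in for sending a message to Accounts Receivable:

```python
# Conceptual sketch of CHK_CREDIT's logic (Python, not the actual HLL
# trigger program); notify is a hypothetical stand-in for sending a
# message to the Accounts Receivable department.
def chk_credit(before_image, after_image, notify):
    if after_image["BAL_OWE"] > after_image["CRED_LMT"]:
        notify("Customer %05d exceeds credit limit: owes %.2f, limit %.2f"
               % (after_image["CUS_NBR"], after_image["BAL_OWE"],
                  after_image["CRED_LMT"]))

messages = []
chk_credit({"CUS_NBR": 1101, "CRED_LMT": 1000.00, "BAL_OWE": 800.00},
           {"CUS_NBR": 1101, "CRED_LMT": 1000.00, "BAL_OWE": 1200.00},
           messages.append)
print(messages)  # one message: the updated balance exceeds the limit
```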

Controlling Promotions

To provide management of the Sales department's new promotional offerings, we add two triggers to the Order Detail file, which fire after a record in the Order Detail file is added or updated:

 ADDPFTRG FILE(ORDR_DTL) +
          TRGTIME(*AFTER) +
          TRGEVENT(*UPDATE) +
          PGM(CHK_CONTRL)

 ADDPFTRG FILE(ORDR_DTL) +
          TRGTIME(*AFTER) +
          TRGEVENT(*INSERT) +
          PGM(CHK_CONTRL)

The trigger time is *AFTER, and the trigger events are *INSERT and *UPDATE. The trigger program (CHK_CONTRL) compares the quantity ordered for an item number to the control number for that item number in the Item Master file. The control number is the maximum number of units that can be ordered for a given item number.
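The logic of CHK_CONTRL can be sketched the same way. Again, this is a conceptual Python rendering, not the actual trigger program; notify stands in for messaging the Sales department:

```python
# Conceptual sketch of CHK_CONTRL's logic (Python, not the actual HLL
# trigger program). A control number of zero means no limit; notify is a
# hypothetical stand-in for messaging the Sales department.
def chk_contrl(detail_record, item_mstr, notify):
    control = item_mstr[detail_record["ITEM_NBR"]]["CNTRL"]
    if control != 0 and detail_record["QUAN"] > control:
        notify("Order %d: quantity %d for item %d exceeds control number %d"
               % (detail_record["ORD_NBR"], detail_record["QUAN"],
                  detail_record["ITEM_NBR"], control))

item_mstr = {1210: {"CNTRL": 2},   # promotional item, limit 2 per order
             1211: {"CNTRL": 0}}   # ordinary item, no limit

messages = []
chk_contrl({"ORD_NBR": 437, "ITEM_NBR": 1210, "QUAN": 5}, item_mstr,
           messages.append)
chk_contrl({"ORD_NBR": 437, "ITEM_NBR": 1211, "QUAN": 50}, item_mstr,
           messages.append)
print(len(messages))  # 1: only the promotional item exceeds its limit
```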

Advanced Database Functions

This case study shows how referential integrity and triggers allow business rules to be defined and implemented at the database level.

Referential integrity is a mechanism that ensures data integrity between two physical files in a relational database. It deals specifically with data integrity and nothing else.

Triggers, on the other hand, can be used to perform any function, limited only by your imagination and programming ability. For example, triggers can be used to perform a complex data validation process or to generate an audit trail.

The important thing to understand is that the business rules for the order-entry system were implemented at the database level, not at the application-program level. This was easily accomplished by adding the referential constraints and triggers to three of the physical files used by the order-entry system. It was not necessary to duplicate code to enforce the business rules in each application program that modifies or manipulates data in the order-entry files. Thus referential integrity and triggers, two of the new, advanced database functions found in DB2/400, can provide a very high level of programmer productivity.

Skip Marchesani is the author of "DB2/400: The New AS/400 Database." He can be reached through Midrange Computing.

REFERENTIAL INTEGRITY RULES

Record Insertion Rule

A record can be inserted in a dependent file only when its foreign key is null, or there is a matching parent key in the parent file.

Record Update Rules

No Action (default)

A parent key in a parent-file record cannot be changed if the original parent key matches a foreign key in a dependent-file record.

A foreign key in a dependent-file record cannot be changed unless there is a parent-file record with a parent key that matches the changed foreign key.

Restrict (Same as No Action)

The difference between the No Action and Restrict rules is in the timing of the integrity check. If Restrict is specified as the rule for an update, the integrity check is immediate; it takes place before the update request is performed. If No Action is specified, the integrity check is not immediate. Other events are allowed to occur first, namely a nested transaction group: one or more database operations that occur before an integrity check is made for a referential constraint on a dependent file. The integrity check takes place at the end of the update request or nested transaction group.

Record Deletion Rules

No Action (default)

A parent-file record cannot be deleted if its parent key matches a foreign key in a dependent-file record.

Restrict (Same as No Action)

The difference between No Action and Restrict is, as in the update rules, the timing of the integrity check. Restrict checks integrity immediately; No Action delays the check until other activities complete.

Cascade

The cascade option for record deletion is extremely powerful. Cascading a deletion on a parent-file record causes the deletion of all dependent-file records with a matching foreign key.

Set Null

The deletion of a parent-file record causes all dependent-file records with a matching foreign key to have their null-capable fields within the foreign key set to null.

At least one field in the foreign key of the dependent file must be null-capable. (DDS allows the definition of null-capable fields with the use of the Allow Null Value (ALLWNULL) keyword.)

Set Default

The deletion of a parent-file record causes all dependent-file records with a matching foreign key to have all fields within the foreign key set to their default values.

The default foreign key in the dependent file must have a matching parent key in the parent file.

In DB2/400, the default value used for a field is the system default value unless the file has been described with field-level default values using the Default (DFT) keyword in DDS. The system default values are zero for non-null numeric fields, blank for non-null character fields, and null for all null-capable fields.
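The Cascade, Set Null, and Set Default deletion rules can be sketched together. This is a conceptual Python illustration of the semantics, not DB2/400 code; field defaults are passed in explicitly here rather than coming from DFT keywords or system defaults:

```python
# Conceptual sketch of the Cascade, Set Null, and Set Default deletion
# rules (Python, not DB2/400 code): what happens to dependent-file
# records whose foreign key matches a deleted parent key.
def delete_parent(parent_key_value, dependent, foreign_key, rule,
                  defaults=None):
    result = []
    for rec in dependent:
        if rec[foreign_key] != parent_key_value:
            result.append(rec)               # unaffected record
        elif rule == "*CASCADE":
            continue                         # matching dependents are deleted
        elif rule == "*SETNULL":
            result.append({**rec, foreign_key: None})
        elif rule == "*SETDFT":
            result.append({**rec, foreign_key: defaults[foreign_key]})
    return result

ordr_dtl = [{"ORD_NBR": 434, "LINE": 1}, {"ORD_NBR": 435, "LINE": 1}]
print(delete_parent(435, ordr_dtl, "ORD_NBR", "*CASCADE"))  # order 434 only
```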

Implementation Considerations

Some of the referential constraint rules require that the parent and dependent files be journaled. This is done to ensure data integrity and to enable the DB2/400 Database Manager to automatically recover from or roll back a transaction group that has not successfully completed.

Another important issue to consider is the current state of the database in which referential integrity will be implemented. If the files in the database are not designed correctly, it may be difficult or even impossible to implement referential integrity. Correctly designed master files should contain single-fact records in third normal form. This structure is readily adaptable to referential integrity.

Certain referential integrity functions can be resource-intensive. Delete Cascade, Delete Set Null, and Delete Set Default may require significant system resources if many records in a dependent file match a deleted record in a parent file. Combining referential integrity with trigger functions should also be approached with caution (see the discussion on one trigger firing another in this article). You should carefully analyze each referential integrity and trigger combination to avoid long cycles or infinite loops.

Referential integrity is a complex and sophisticated DB2/400 function. The performance impact of referential integrity should be minimal if it is only used to replace similar functions currently implemented at the application level. The implementation of additional referential integrity functions at the database level may require additional system resources and could have a negative impact on system performance.



Figure 1 Order Header File

 *===============================================================
 * To compile:
 *
 *   CRTPF FILE(XXX/ORDR_HDR) SRCFILE(XXX/QDDSSRC)
 *
 *===============================================================
 *. 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7
 A                                      UNIQUE
 A          R HDR_REC                   TEXT('Order Header Record')
 A            ORD_NBR        5  0       TEXT('Order Number')
 A            CUS_NBR        5  0       TEXT('Customer Number')
 A            ORD_AMT        9  2       TEXT('Order Amount')
 A          K ORD_NBR

Figure 2 Order Detail File

 *===============================================================
 * To compile:
 *
 *   CRTPF FILE(XXX/ORDR_DTL) SRCFILE(XXX/QDDSSRC)
 *
 *===============================================================
 *. 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7
 A                                      UNIQUE
 A          R DTL_REC                   TEXT('Order Detail Record')
 A            ORD_NBR        5  0       TEXT('Order Number')
 A            LINE           3  0       TEXT('Line Number')
 A            ITEM_NBR       5  0       TEXT('Item Number')
 A            QUAN           5  0       TEXT('Quantity')
 A          K ORD_NBR
 A          K LINE

Figure 3 Item Master File

 *===============================================================
 * To compile:
 *
 *   CRTPF FILE(XXX/ITEM_MSTR) SRCFILE(XXX/QDDSSRC)
 *
 *===============================================================
 *. 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7
 A                                      UNIQUE
 A          R ITEM_REC                  TEXT('Item Master Record')
 A            ITEM_NBR       5  0       TEXT('Item Number')
 A            DESC          30          TEXT('Description')
 A            PRICE          7  2       TEXT('Price')
 A            ON_HAND        7  0       TEXT('On Hand')
 A            CNTRL          5  0       TEXT('Control Number')
 A          K ITEM_NBR

Figure 4 Customer Master File

 *===============================================================
 * To compile:
 *
 *   CRTPF FILE(XXX/CUST_MSTR) SRCFILE(XXX/QDDSSRC)
 *
 *===============================================================
 *. 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7
 A                                      UNIQUE
 A          R CUST_REC                  TEXT('Customer Record')
 A            CUS_NBR        5  0       TEXT('Customer Number')
 A            CUS_NAME      30          TEXT('Customer Name')
 A            CRED_LMT       7  2       TEXT('Credit Limit')
 A            BAL_OWE        7  2       TEXT('Balance Owed')
 A          K CUS_NBR

Figure 5 Viewing Constraints with the DSPFD Command

 5763SS1 V3R1ME 940909                Display File Description
 File  . . . . . . . . . . . :  ORDR_DTL
   Library . . . . . . . . . :  *LIBL
 Type of information . . . . :  *CST
 File attributes . . . . . . :  *ALL
 System  . . . . . . . . . . :  *LCL
 Processor . . . . . . . . . :  IBM AS/400

                        Constraint Description
 Referential Constraint
 Constraint . . . . . . . . . . . . . . :  CST        ORD_NBR_CST
 Type . . . . . . . . . . . . . . . . . :  TYPE       *REFCST
 Check pending  . . . . . . . . . . . . :             NO
 Constraint state . . . . . . . . . . . :  STATE      ESTABLISHED *ENABLED
 Parent File Description
 File . . . . . . . . . . . . . . . . . :  PRNFILE    ORDR_HDR
   Library  . . . . . . . . . . . . . . :  LIB        ARLIB
 Parent key . . . . . . . . . . . . . . :  PRNKEY     ORD_NBR
 Foreign key  . . . . . . . . . . . . . :  FRNKEY     ORD_NBR
 Delete rule  . . . . . . . . . . . . . :  DLTRULE    *NOACTION
 Update rule  . . . . . . . . . . . . . :  UPDRULE    *NOACTION

  • FORTRAThe thought of printing, distributing, and storing iSeries reports manually may reduce you to tears. Paper and labor costs associated with report generation can spiral out of control. Mountains of paper threaten to swamp your files. Robot automates report bursting, distribution, bundling, and archiving, and offers secure, selective online report viewing. Manage your reports with the Robot Report Management Solution. Key features include:

  • FORTRAFor over 30 years, Robot has been a leader in systems management for IBM i. With batch job creation and scheduling at its core, the Robot Job Scheduling Solution reduces the opportunity for human error and helps you maintain service levels, automating even the biggest, most complex runbooks. Manage your job schedule with the Robot Job Scheduling Solution. Key features include:

  • LANSA Business users want new applications now. Market and regulatory pressures require faster application updates and delivery into production. Your IBM i developers may be approaching retirement, and you see no sure way to fill their positions with experienced developers. In addition, you may be caught between maintaining your existing applications and the uncertainty of moving to something new.

  • LANSAWhen it comes to creating your business applications, there are hundreds of coding platforms and programming languages to choose from. These options range from very complex traditional programming languages to Low-Code platforms where sometimes no traditional coding experience is needed. Download our whitepaper, The Power of Writing Code in a Low-Code Solution, and:

  • LANSASupply Chain is becoming increasingly complex and unpredictable. From raw materials for manufacturing to food supply chains, the journey from source to production to delivery to consumers is marred with inefficiencies, manual processes, shortages, recalls, counterfeits, and scandals. In this webinar, we discuss how:

  • The MC Resource Centers bring you the widest selection of white papers, trial software, and on-demand webcasts for you to choose from. >> Review the list of White Papers, Trial Software or On-Demand Webcast at the MC Press Resource Center. >> Add the items to yru Cart and complet he checkout process and submit

  • Profound Logic Have you been wondering about Node.js? Our free Node.js Webinar Series takes you from total beginner to creating a fully-functional IBM i Node.js business application.

  • SB Profound WC 5536Join us for this hour-long webcast that will explore:

  • Fortra IT managers hoping to find new IBM i talent are discovering that the pool of experienced RPG programmers and operators or administrators with intimate knowledge of the operating system and the applications that run on it is small. This begs the question: How will you manage the platform that supports such a big part of your business? This guide offers strategies and software suggestions to help you plan IT staffing and resources and smooth the transition after your AS/400 talent retires. Read on to learn: