SQL Triggers and Other Trigger Enhancements in V5

Read Trigger

Read triggers have been implemented in OS/400 to satisfy a U.S. federal statute (HIPAA) requiring that all access to patient records in the medical profession be logged and auditable.

Read triggers are supported only as external triggers and are implemented via the new read trigger event; there is no support for a read trigger with SQL triggers. The reason is that a read trigger has a very specific function and is not intended for general use: indiscriminate use alters System i and iSeries performance characteristics and can significantly degrade system performance and throughput.

Any read-only access to a file with a read trigger defined—whether by a user program or a system function—will cause the trigger to fire. This means that the Copy File (CPYF) command, the Display Physical File Member (DSPPFM) command, query products such as Query/400 or SQL, and user-written display or report programs will all fire the read trigger every time a record is accessed. If you access one million records, you just fired the read trigger one million times. Get the picture?

The use of a read trigger will disable some system functions that are used to optimize system performance and throughput. For example, a read trigger prevents adaptive blocking and double buffering when processing a file sequentially (arrival sequence) and asynchronous fetch of the next record when processing a file sequentially by key or index. This means that records are read from the file one at a time instead of in optimized blocks, dramatically increasing file I/O time.

The effect is that batch programs using these processing techniques on a file with a read trigger defined will run orders of magnitude longer. This negative impact on performance can be mitigated to a certain extent by increasing CPU power, adding memory, and adding disk arms.

Multiple Triggers per Database or Trigger Event

OS/400 releases prior to V5R1 limited a file to a maximum of six triggers. Many software providers now include triggers in their application packages, which can conflict with the trigger requirements of System i and iSeries users.

When conflicting trigger requirements occur, it can be very difficult or impossible to combine two or more trigger programs into one. IBM's V5R1 solution was to increase the limit of six triggers to 300 and to allow more than one trigger with the same trigger time and trigger event to be defined for a file.

When multiple triggers are defined for the same database event, the triggers are fired in the order they were created. The first trigger created is the first one fired; the last created is the last one fired.

This raises some interesting questions when a trigger has to be deleted and recreated. Suppose the first trigger on the execution list must be deleted and then recreated. It then moves from being first on the execution list to last on the list. The question you must ask is whether the trigger depends on its place in the execution list to function properly. If the answer is "yes, the trigger firing order must be maintained," you will need to create two CL or SQL scripts: one that removes all the triggers for the file in question and a second that recreates all the triggers in the required sequence.
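
For example, a pair of SQL scripts along these lines would preserve the required firing order by recreating the triggers in sequence. This is a sketch only: the table orders, the audit tables order_audit and order_errors, the column names, and the trigger names are all illustrative, not taken from the article.

-- Script 1: remove both triggers from the file
DROP  TRIGGER  ordaudit;
DROP  TRIGGER  ordcheck;

-- Script 2: recreate them in the required firing order;
-- the first trigger created is the first one fired
CREATE  TRIGGER  ordaudit
     AFTER  UPDATE  ON  orders
     REFERENCING  NEW  ROW  AS  n
     FOR  EACH  ROW  MODE  DB2ROW
     INSERT  INTO  order_audit  VALUES(n.ordno,  CURRENT TIMESTAMP);

CREATE  TRIGGER  ordcheck
     AFTER  UPDATE  ON  orders
     REFERENCING  NEW  ROW  AS  n
     FOR  EACH  ROW  MODE  DB2ROW
     WHEN  (n.ordqty  <  0)
          INSERT  INTO  order_errors  VALUES(n.ordno,  CURRENT TIMESTAMP);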

Named Triggers

The increase to 300 triggers per file means that the combination of trigger time and trigger event can no longer be used to identify a trigger. Starting in V5R1, both external and SQL triggers are given a name to provide unique identification when they are created. The trigger name must be unique within a given library (not per file) and can be a maximum of 128 characters long. If you do not provide a name when the trigger is created, DB2 UDB will create a default name, and I guarantee you will not like it!

To support the naming of a trigger, a trigger name and library parameters have been added to the ADDPFTRG and RMVPFTRG commands and the new CREATE TRIGGER and DROP TRIGGER SQL statements (more on these SQL statements later).

Change Physical File Trigger (CHGPFTRG) Command

The CHGPFTRG command changes the state of one or all triggers defined for a file or table. There are two possible trigger states: disabled or enabled. When a trigger is disabled, its corresponding trigger program will not be invoked when the trigger event is satisfied. When a trigger is enabled, its corresponding trigger program will be invoked when the trigger event is satisfied.

This eliminates a problem from prior releases of OS/400, where the only way to disable a trigger was to delete it and then recreate it later to enable it again. Following is an example of the CHGPFTRG command syntax.

CHGPFTRG   FILE(Lib_Name/File_Name)   TRG(Trigger_Name)   
          TRGLIB(Lib_Name)   STATE(*DISABLED)

System Catalog Trigger Enhancements

As an aid in managing the V5R1 trigger enhancements, IBM has added four new files to the System Catalog to log and store information about triggers:

  • SYSTRIGGERS contains one row for each trigger in a library for both external and SQL triggers (see the sample query after this list).
  • SYSTRIGCOL contains one row for each column or field either implicitly or explicitly referenced in the WHEN clause or the SQL statements for an SQL trigger.
  • SYSTRIGDEP contains one row for each object referenced in the WHEN clause or SQL statements for an SQL trigger.
  • SYSTRIGUPD contains one row for each column identified in the UPDATE column list, if any.
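
For example, a query against SYSTRIGGERS can list every trigger defined in a library. This is only a sketch: the catalog column names used here (TRIGGER_NAME, EVENT_OBJECT_TABLE, and so on) are assumptions and should be verified against the SYSTRIGGERS file on your release.

-- Sketch only: column names are assumed and may differ by release
SELECT  TRIGGER_NAME,  EVENT_OBJECT_TABLE,
        EVENT_MANIPULATION,  ACTION_TIMING,  ENABLED
     FROM  QSYS2.SYSTRIGGERS
     WHERE  TRIGGER_SCHEMA  =  'MYLIB'
     ORDER  BY  EVENT_OBJECT_TABLE,  TRIGGER_NAME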

SQL Triggers

SQL triggers are a V5R1 enhancement and use one or more SQL statements (instead of a user-provided or written program) within the trigger body to perform the desired action when the trigger fires. SQL trigger support in V5R1 is a superset of the support found in DB2 UDB Version 7.1 and provides an industry-standard method for defining and managing triggers that has a high degree of portability to other database management systems.

SQL triggers use IBM's SQL procedural language to implement the trigger logic, and they also provide more granularity and function than external triggers, with column or field-level triggers, row or record-level triggers, and (SQL) statement-level triggers.

An SQL trigger can be added to a file or table with the CREATE TRIGGER statement and can be removed with the DROP TRIGGER statement, or both can be done with the Database function in iSeries Navigator.

SQL Trigger Components

In V5R1, an external trigger has the following five components (remember that the trigger name was just added in V5R1): base file or table, trigger name, trigger event, trigger time, and trigger program.

An SQL trigger has the same first four components; however, the trigger program is replaced by five additional components: trigger granularity, transition variables, transition tables, trigger mode, and triggered action.

The base file or table is the physical file or table to which the trigger is added. The trigger name provides unique trigger identification within a library. The trigger event is the condition that causes the trigger to fire. It can be the insert of a new record or row, the delete of an existing row, the update of an existing row or column, or, in very limited circumstances, the read of an existing row (see "read trigger" earlier in this article). The trigger time is when the triggered action will be performed, either before or after the trigger event completes. The trigger granularity, in conjunction with the trigger event, determines what causes the trigger to fire. Granularity can be at the column or field level, row or record level, or (SQL) statement level.

Column-level triggers are an extension of the Update trigger event and are available only with SQL triggers. Only an update of those columns listed as part of the update trigger event will cause the trigger to fire and the triggered action to be performed.

The following SQL syntax shows how the column names are listed:

UPDATE OF   Column_Name1, Column_Name2,...

If no columns are listed in the UPDATE OF clause, then an update to any column defined in the row causes the associated trigger to fire.

With a row-level trigger, the associated trigger is fired and the triggered action is performed each time the trigger event is satisfied. If the trigger condition is never satisfied, the triggered action is never performed. An SQL trigger is defined as a row-level trigger with the FOR EACH ROW clause. An external trigger is implicitly a row-level trigger.

Statement-level triggers are available only with SQL triggers, and the triggered action is performed only once per trigger event, regardless of the number of rows processed. If the trigger event is never satisfied, the triggered action is still performed once at the end of the SQL statement processing. A statement-level trigger can be used only in conjunction with a trigger time of After and a trigger mode of DB2SQL (more on trigger mode later) and is defined with the FOR EACH STATEMENT clause.
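
A minimal sketch of a statement-level trigger follows; the tables orders and upd_log (assumed to hold a timestamp and a user name) are hypothetical. The trigger logs one row per UPDATE statement, no matter how many rows that statement changes.

CREATE  TRIGGER  upd_once
     AFTER  UPDATE  ON  orders
     FOR  EACH  STATEMENT  MODE  DB2SQL
     INSERT  INTO  upd_log  VALUES(CURRENT TIMESTAMP,  USER);

Because the granularity is statement-level, the trigger time must be After and the mode must be DB2SQL, as noted above.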

Transition variables provide the same function as the before and after images in the trigger buffer used with external triggers. They are used to qualify column names so that you can reference the image of the single row that caused the trigger to fire, before and/or after the trigger event has completed.

Transition variables are not valid with statement-level triggers. They are defined with the OLD ROW clause for the before image and the NEW ROW clause for the after image.

The following SQL syntax is used to describe and reference transition variables as part of an SQL trigger.

...REFERENCING   OLD ROW   AS   Oldrow   NEW ROW   AS   Newrow
...
...WHERE   Newrow.Salary   >   Oldrow.salary   + 10000...

A transition table provides a function analogous to the before and after images in the trigger buffer used with external triggers and is a temporary table that contains the image of all rows affected before and/or after the trigger event completes. Since a single SQL statement can process multiple rows in a file, a mechanism is needed to be able to track/log the activity on those rows processed. Transition tables provide that capability.

A transition table can be used only in conjunction with a trigger time of After and a trigger mode of DB2SQL. It is defined with the OLD TABLE clause for a before image of all affected rows and the NEW TABLE clause for an after image of all affected rows.

The following SQL syntax is used to describe and reference a transition table as part of an SQL trigger.

...REFERENCING   OLD TABLE   AS   Old_Table_Name...
...
...(SELECT   COUNT(*)   FROM   Old_Table_Name)...
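
Combining these clauses, the following sketch uses a transition table to record how many rows each DELETE statement removed. The tables emp and del_log (assumed to hold a timestamp and a row count) are hypothetical.

CREATE  TRIGGER  del_count
     AFTER  DELETE  ON  emp
     REFERENCING  OLD  TABLE  AS  deleted_rows
     FOR  EACH  STATEMENT  MODE  DB2SQL
     INSERT  INTO  del_log
          SELECT  CURRENT TIMESTAMP,  COUNT(*)
               FROM  deleted_rows;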

There are two trigger modes: DB2ROW and DB2SQL. A mode of DB2ROW causes the trigger to fire after each row operation and is valid only with row-level triggers. It is an exclusive function of DB2 UDB for System i and iSeries and is not available in other DB2 UDB implementations. A mode of DB2SQL causes the trigger to fire after all row operations are complete and is valid only with a trigger time of After. If it is specified for a row-level trigger, the triggered action is executed n times after all row operations, where n equals the number of rows processed. This is not as efficient as DB2ROW, since each row is effectively processed twice.

The triggered action is analogous to the trigger program in external triggers and has three parts: the SET OPTION clause, the WHEN clause, and the SQL trigger body:

The SET OPTION clause specifies the options that will be used to create the trigger.

The WHEN clause specifies the search or selection criteria or the execution criteria for the trigger body. In other words, it specifies when the SQL statements in the trigger body will be executed.

The SQL trigger body contains one or more SQL statements that perform the desired action when the trigger fires. Multiple SQL statements in the trigger body are delineated with the BEGIN and END statements. Each complete SQL statement in the trigger body must be ended with a semicolon (;).

The standard DDL and DML SQL statements—such as SELECT, INSERT, DELETE, and CREATE—can be used in the trigger body along with IBM's SQL procedural language.
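
As an illustration of how these three parts fit together, the following sketch (the table emp and its hiredate column are assumptions) uses a SET OPTION clause, a WHEN clause, and a trigger body written in SQL procedural language to reject an invalid row before it is written.

CREATE  TRIGGER  chk_hiredate
     BEFORE  INSERT  ON  emp
     REFERENCING  NEW  ROW  AS  n
     FOR  EACH  ROW  MODE  DB2ROW
     SET  OPTION  DATFMT = *ISO
     WHEN  (n.hiredate  >  CURRENT DATE)
BEGIN
     SIGNAL  SQLSTATE  '75001'
          SET  MESSAGE_TEXT  =  'Hire date cannot be in the future';
END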

An SQL trigger can be added to a file with the CREATE TRIGGER statement and removed with the DROP TRIGGER statement. The SQL syntax for these statements is shown below.

Detailed SQL Syntax for Create Trigger Statement

                                    +-NO CASCADE--+          
>>--CREATE TRIGGER--trigger-name--+-+-------------+-BEFORE-+-->  
                                  +-AFTER------------------+     

>--+--INSERT--------------------------+--ON--table-name------->  
   |--DELETE--------------------------|               
   +--UPDATE--+---------------------+-+                         
              |    +-,-----<-----+  |
              |    |             |  |
              +-OF-+-column-name-+--+

>-+---------------------------------------------------------+->  
  |             +-----------------------<-----------------+ |     
  |             |                                         | |
  |             |       +-ROW-+ +-AS-+                    | |   
  +-REFERENCING-+-+-OLD-+-----+-+----+-correlation-name-+-+-+
                  |                                     |
                  |     +-ROW-+ +-AS-+                  |
                  +-NEW-+-----+-+----+-correlation-name-+
                  |                                     |
                  |           +-AS-+                    |
                  +-OLD TABLE-+----+---table-identifier-+ 
                  |                                     |
                  |           +-AS-+                    |
                  +-NEW TABLE-+----+---table-identifier-+

   +--FOR EACH STATEMENT--+  +--MODE DB2SQL--+         
   |                      |  |               |
>--+----------------------+--+---------------+---------------->
   |                      |  |               |
   +--FOR EACH ROW--------+  +--MODE DB2ROW--+

>--+-----------------------------------+---------------------->
   |                                   |
   +--SET OPTION---option-statement----+  

>--+---------------------------------------------------+------>
   |                                                   |
   +--WHEN--(--trigger-body-execution-criteria------)--+  

>--SQL-trigger-body------------------------------------------><


Detailed SQL Syntax for Drop Trigger Statement

>>--DROP TRIGGER----trigger-name------------------------------><
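
For example, once a trigger named empsal has been created (as in the first example below), it can be removed with a statement as simple as this:

DROP  TRIGGER  empsal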

When SQL triggers are created, they have an implicit attribute of ALWREPCHG(*YES). This attribute must be explicitly specified when using external triggers; otherwise, it defaults to ALWREPCHG(*NO).

SQL Trigger Examples


Column-Level Trigger with Simple Trigger Body

CREATE  TRIGGER  empsal
     BEFORE  UPDATE  OF  salary  ON  emp
     REFERENCING  NEW  AS  new  OLD  AS  old
     FOR  EACH  ROW  MODE  DB2ROW
     WHEN  (new.salary  >  1.5  *  old.salary)
          SET  new.salary  =  1.5  *  old.salary;

The SQL trigger created in this example is called empsal and is fired before an update of the salary column in the table or file called emp. Transition variables called new and old have been defined for the new row (the after image) and the old row (the before image) and will be used to qualify field names referenced in the trigger body. The trigger is a row-level trigger, and its mode is DB2ROW.

The SQL statement in the trigger body will be executed when new.salary is greater than 1.5 times old.salary. When this criterion is satisfied, new.salary is set to 1.5 times the old salary, capping the increase at 50 percent. Note that a Before trigger like this is the preferred method for modifying or changing data before it is actually written to a table or file.

Row-Level Trigger with Complex Trigger Body

CREATE  TRIGGER  big_spenders  
     AFTER  INSERT  ON  expenses
     REFERENCING  NEW  ROW  AS  n
     FOR  EACH  ROW
     MODE  DB2ROW
     WHEN  (n.totalamount  >  10000)
BEGIN   
     DECLARE  emplname  CHAR(30);
     SET  emplname  =  
          (SELECT  lname  FROM  employee
               WHERE  empid  =  n.empno);
     INSERT  INTO  travel_audit 
          VALUES(n.empno,  emplname,  n.deptno,  
               n.totalamount,  n.enddate);
END

The SQL trigger created in this example is called big_spenders and is fired after the insert of a row in the table or file called expenses. A transition variable called n has been defined for new row (the after image) and will be used to qualify field names referenced in the trigger body. The trigger is a row-level trigger, and its mode is DB2ROW.

Note that there are multiple SQL statements in the trigger body and that they are delineated with a BEGIN and END statement. Also note that each SQL statement is ended with a semicolon. The SQL statements in the trigger body will be executed when n.totalamount is greater than 10,000. When this criterion is satisfied, a row is inserted into the table called travel_audit, containing the values n.empno, emplname, n.deptno, n.totalamount, and n.enddate.
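
For readers who want to try this example, the audit table might be created along these lines. This is a sketch only; the column names and data types are assumptions rather than definitions taken from the article.

CREATE  TABLE  travel_audit
     (empno        CHAR(6),
      lname        CHAR(30),
      deptno       CHAR(3),
      totalamount  DECIMAL(11, 2),
      enddate      DATE)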

New Trigger Enhancements

IBM has provided significant trigger enhancements in V5R1. The maximum number of triggers per file has been raised from 6 to 300. To support 300 triggers per file, triggers are now given a name, which must be unique within a library, when they are created. The new CHGPFTRG command allows you to easily disable/enable a single trigger or a group of related triggers, and four new files have been added to the System Catalog to log and store trigger information.

SQL triggers provide a new level of function and use one or more SQL statements (instead of a user-provided/written program) within the trigger body to perform the desired action when the trigger fires. SQL trigger support in V5R1 provides an industry-standard method for defining and managing triggers that has a high degree of portability to other database management systems. SQL triggers also provide more granularity and function than external triggers, with column or field-level triggers, row or record-level triggers, and statement-level (SQL) triggers.

Lastly, there is the new read trigger, which is not intended for general use, so you must approach it with extreme caution. Use of read triggers can have a severe detrimental impact on System i and iSeries performance and throughput.

The V5R1 trigger enhancements offer additional options for fulfilling application requirements. With proper use, these enhancements can be very beneficial to programmers.

Skip Marchesani retired from IBM after 30 years and is now a consultant with Custom Systems Corp. and one of the founding partners of System i Developer. Skip spent much of his IBM career working with the Rochester Development Lab on projects for S/38 and AS/400 and was involved with the development of the AS/400. He was part of the team that taught early AS/400 education to customers and IBM lab sites worldwide.

Skip is recognized as an industry expert on DB2 UDB (aka DB2/400) and author of the book DB2/400: The New AS/400 Database. He specializes in providing customized education for any area of the System i, iSeries, and AS/400; does database design and design reviews; and performs general System i, iSeries, and AS/400 consulting for interested clients. He has been a speaker for user groups, technical conferences, and System i, iSeries, and AS/400 audiences around the world. He is an award-winning COMMON speaker and has received its Distinguished Service Award.
